Daily Tech Digest - January 28, 2025


Quote for the day:

“Winners are not afraid of losing. But losers are. Failure is part of the process of success. People who avoid failure also avoid success.” -- Robert T. Kiyosaki


How Long Does It Take Hackers to Crack Modern Hashing Algorithms?

Because hashing algorithms are one-way functions, the only way to compromise hashed passwords is through brute-force techniques. Cyber attackers employ specialized hardware like GPUs and cracking software (e.g., Hashcat, L0phtCrack, John the Ripper) to execute brute-force attacks at scale—typically millions or billions of combinations at a time. Even with these sophisticated, purpose-built cracking tools, password cracking times can vary dramatically depending on the specific hashing algorithm used and the password's length and character composition. ... With readily available GPUs and cracking software, attackers can instantly crack numeric passwords of 13 characters or fewer secured by MD5's 128-bit hash; on the other hand, an 11-character password consisting of numbers, uppercase/lowercase characters, and symbols would take 26.5 thousand years. ... When used with long, complex passwords, SHA256 is nearly impenetrable using brute-force methods: an 11-character SHA256-hashed password using numbers, upper/lowercase characters, and symbols takes 2,052 years to crack using GPUs and cracking software. However, attackers can instantly crack nine-character SHA256-hashed passwords consisting of only numeric or lowercase characters.
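The crack-time gaps described above come straight out of keyspace arithmetic: the number of candidate passwords is the character-set size raised to the password length, and dividing that by an assumed guess rate gives a rough worst-case time to exhaust the space. A minimal Python sketch of that calculation follows; the guess rates are illustrative assumptions, not benchmarks of any particular cracking rig or the figures behind the article's numbers.

```python
# Rough brute-force estimate: keyspace size divided by guesses per second.
# The guess rates below are illustrative assumptions, not measured benchmarks.
GUESSES_PER_SECOND = {
    "MD5": 100e9,      # fast, unsalted hash; cracks at very high rates on GPUs
    "SHA256": 10e9,    # slower than MD5, but still fast on GPUs
    "bcrypt": 100e3,   # deliberately slow, with a tunable work factor
}

CHARSET_SIZES = {
    "digits": 10,
    "lowercase": 26,
    "mixed+symbols": 95,   # roughly the printable ASCII range
}

SECONDS_PER_YEAR = 365 * 24 * 3600

def crack_time_seconds(length: int, charset: str, algorithm: str) -> float:
    """Worst-case time to exhaust the keyspace, in seconds."""
    keyspace = CHARSET_SIZES[charset] ** length
    return keyspace / GUESSES_PER_SECOND[algorithm]

if __name__ == "__main__":
    print(f"13-digit password, MD5:   {crack_time_seconds(13, 'digits', 'MD5'):,.0f} seconds")
    print(f"11-char full set, SHA256: {crack_time_seconds(11, 'mixed+symbols', 'SHA256') / SECONDS_PER_YEAR:,.0f} years")
    print(f"11-char full set, bcrypt: {crack_time_seconds(11, 'mixed+symbols', 'bcrypt') / SECONDS_PER_YEAR:,.0f} years")
```

The point is less the exact numbers than the scaling: lengthening the password or widening the character set grows the keyspace exponentially, while choosing a deliberately slow hash cuts the attacker's guess rate by orders of magnitude.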


Sharply rising IT costs have CIOs threading the needle on innovation

“Within two years, it will be virtually impossible to buy a PC, tablet, laptop, or mobile phone without AI,” Lovelock says. “Whether you want it or not, you’re going to get it sold to you.” Vendors have begun to build AI into software as well, he says, and in many cases, charge customers for the additional functionality. IT consulting services will also add AI-based services to their portfolios. ... But the biggest expected price hikes are for cloud computing services, despite years of expectations that cloud prices wouldn’t increase significantly, Lovelock says. “For many years, CIOs were taught that in the cloud, either prices went down, or you got more functionality, and occasionally both, that the economies of scale accrue to the cloud providers and allow for at least stable prices, if not declines or functional expansion,” he says. “It wasn’t until post-COVID in the energy crisis, followed by staff cost increases, when that story turned around.” ... “Generative AI is no longer seen as a one-size-fits-all solution, and this shift is helping both solutions providers and businesses take a more practical approach,” he says. “We don’t see this as a sign of lower expectations but as a move toward responsible and targeted use of generative AI.”


US takes aim at healthcare cybersecurity with proposed HIPAA changes

The major update to the HIPAA security regulations also requires healthcare organizations to strengthen security incident response plans and procedures, carry out annual penetration tests and compliance audits, among other measures. Many of the proposals cover best practice enterprise security guidelines foundational to any mature cybersecurity program. ... Cybersecurity experts praised the shift to a risk-based approach covered by the security rule revamp, while some expressed concerns that the measures might tax the financial resources of smaller clinics and healthcare providers. “The security measures called for in the proposed rule update are proven to be effective and will mitigate many of the risks currently present in the poorly protected environments of many healthcare payers, providers, and brokers,” said Maurice Uenuma, VP & GM for the Americas and security strategist at data security firm Blancco. ... Uenuma added: “The challenge will be to implement these measures consistently at scale.” Trevor Dearing, director of critical infrastructure at enterprise security tools firm Illumio, praised the shift from prevention to resilience and the risk-based approach implicit in the rule changes, which he compared to the EU’s recently introduced DORA rules for financial sector organizations.


Risk resilience: Navigating the risks that boards can’t ignore in 2025

The geopolitical landscape is more turbulent than ever. Companies will need to prepare for potential shocks like regional conflicts, supply chain disruptions, or even another pandemic. If geopolitical risks feel dizzyingly complex, scenario planning will be a powerful tool in mapping out different political and economic scenarios. By envisioning various outcomes, boards can better understand their vulnerabilities, prepare tailored responses and enhance risk resilience. To prepare for the year ahead, board and management teams should ask questions such as: How exposed are we to geopolitical risks in our supply chain? Are we engaging effectively with local governments in key regions?  ... The risks of 2025 are formidable, but so are the opportunities for those who lead with purpose. With informed leadership and collaboration, we can navigate the complexities of the modern business environment with confidence and resilience. Resilience will be the defining trait of successful boards and businesses in the years ahead. It requires not only addressing known risks but also preparing for the unexpected. By prioritising scenario planning, fostering a culture of transparency, and aligning risk management with strategic goals, boards can navigate uncertainty with confidence.


Freedom from Cyber Threats: An AI-powered Republic on the Rise

Developing a resilient AI-driven cybersecurity infrastructure requires substantial investment. The Indian government’s allocation of over ₹550 crores to AI research demonstrates its commitment to innovation and data security. Collaborations with leading cybersecurity companies exemplify scalable solutions to secure digital ecosystems, prioritising resilience, ethical governance, and comprehensive data protection. Research tools like the Gartner Magic Quadrant also offer reliable and useful insights into the leading companies that offer the best and latest SIEM technology solutions. Upskilling the workforce is equally important. Training programs focused on AI-specific cybersecurity skills are preparing India’s talent pool to tackle future challenges effectively. ... Proactive strategies are essential to counter the evolution of cyber threats. Simulation tools enable organizations to anticipate and neutralise potential vulnerabilities. Cybersecurity threats can now be intercepted by advanced SIEM-based threat detection platforms and autonomous threat sweeps. Advanced threat research, conducted by dedicated labs within organisations, plays a crucial role in uncovering emerging attack vectors and providing actionable insights to pre-empt potential breaches.


Enterprises are hitting a 'speed limit' in deploying Gen AI - here's why

The regulatory issue, the report states, makes clear "respondents' unease about which use cases will be acceptable, and to what extent their organizations will be held accountable for Gen AI-related problems." ... The latest iteration was conducted in July through September, and received 2,773 responses from "senior leaders in their organizations and included board and C-suite members, and those at the president, vice president, and director level," from 14 countries, including the US, UK, Brazil, Germany, Japan, Singapore, and Australia, and across industries including energy, finance, healthcare, and media and telecom. ... Despite the slow pace, Deloitte's CTO is confident in the continued development, and ultimate deployment, of Gen AI. "GenAI and AI broadly is our reality -- it's not going away," writes Bawa. Gen AI is ultimately like the Internet, cloud computing, and mobile waves that preceded it, he asserts. Those "transformational opportunities weren't uncovered overnight," he says, "but as they became pervasive, they drove significant disruption to business and technology capabilities, and also triggered many new business models, new products and services, new partnerships, and new ways of working and countless other innovations that led to the next wave across industries."


NVMe-oF Substantially Reduces Data Access Latency

NVMe-oF is a network protocol that extends the parallel access and low-latency features of the Nonvolatile Memory Express (NVMe) protocol across networked storage. Originally designed for local storage and common in direct-attached storage (DAS) architectures, NVMe delivers high-speed data access and low latency by directly interfacing with solid-state drives. NVMe-oF allows these same advantages to be achieved in distributed and clustered environments by enabling external storage to perform as if it were local. ... Storage targets can be dynamically shared among workloads, providing composable storage resources that deliver flexibility, agility, and greater resource efficiency. The adoption of NVMe-oF is evident across industries where high performance, efficiency, and low latency at scale are critical. Notable market sectors include financial services, e-commerce, AI and machine learning, and specialty cloud service providers (CSPs). Legacy VM migration, real-time analytics, high-frequency trading, online transaction processing (OLTP), and the rapid development of cloud-native, performance-intensive workloads at scale are use cases that have compelled organizations to modernize their data platforms with NVMe-oF solutions. Its ability to handle massive data flows with efficiency and high performance makes it indispensable for I/O-intensive workloads.


The crisis of AI’s hidden costs

Let me paint you a picture of what keeps CFOs up at night. Imagine walking into a massive data center where 87% of the computers sit there, humming away, doing nothing. Sounds crazy, right? That’s exactly what’s happening in your cloud environment. If you manage a typical enterprise cloud computing operation, you are wasting money. It’s not rare to see companies spend $1 million monthly on cloud resources, with 75% to 80% of that amount going right out the window. It’s no mystery what this means for your bottom line. ... Smart enterprises aren’t just hoping the problem will disappear; they’re taking action. Here’s my advice: Don’t rely solely on the basic tools offered by your cloud provider; they won’t give you the immediate cost visibility you need. Instead, invest in third-party solutions that provide a clear, up-to-the-minute picture of your resource utilization. Focus on power-hungry GPUs running AI workloads. ... Rather than spinning up more instances, consider rightsizing. Modern instance types offered by public cloud providers can give you more bang for your buck. ... Predictive analytics can help you scale up or down based on demand, ensuring you’re not paying for idle resources. ... Be strategic and look at the bigger picture. Evaluate reserved instances and savings plans to balance cost and performance. 


AI security posture management will be needed before agentic AI takes hold

We ran into these same issues when most companies shifted their workloads to the cloud. Authentication issues – like the dreaded S3 bucket that defaulted to a public setting and was the cause of way too many breaches before it became secure by default – became the domain of cloud security posture management (CSPM) tools before they were swallowed up by the CNAPP acronym. Identity and permission issues (or entitlements, if you prefer) became the alphabet soup of CIEM (cloud infrastructure entitlement management), thankfully now also under the umbrella of CNAPP. AI bots will need to be monitored by similar toolsets, but those don’t exist yet. I’ll go out on a limb and suggest SAFAI (pronounced Sah-fy) as an acronym: Security Assessment Frameworks for AI. These would, much like CNAPP tools, embed themselves in an agentless or transparent fashion, crawl through your AI bots collecting configuration, authentication, and permission issues, and highlight the pain points. You’d still need the standard panoply of other tools to protect you, since AI bots sit atop the same infrastructure. And that’s on top of worrying about prompt injection opportunities, which you unfortunately have no control over, as they depend entirely on the models and how they are used.
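To make the CSPM idea concrete, the sketch below shows the kind of misconfiguration check such tools automate: flagging S3 buckets that lack a public access block. It is a simplified illustration using boto3, assuming AWS credentials are already configured; real CSPM products run thousands of such checks continuously.

```python
# Minimal CSPM-style check: flag S3 buckets with a missing or partial public access block.
# Simplified illustration; assumes AWS credentials are already configured for boto3.
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)
            settings = config["PublicAccessBlockConfiguration"]
            if not all(settings.values()):
                findings.append((name, "public access block only partially enabled"))
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                findings.append((name, "no public access block configured"))
            else:
                raise
    return findings

if __name__ == "__main__":
    for name, issue in buckets_missing_public_access_block():
        print(f"{name}: {issue}")
```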


Hackers Use Malicious PDFs, Pose as USPS in Mobile Phishing Scam

The bad actors make the malicious PDFs look like communications from the USPS that are sent via SMS text messages and use what the researchers called in a report Monday a “never-before-seen means of obfuscation” to help them bypass traditional security controls. They embed the malicious links in the PDF, essentially hiding them from endpoint security solutions. ... The phishing attacks are part of a larger and growing trend of what Zimperium calls “mishing,” an umbrella word for campaigns that use email, text messages, voice calls, or QR codes that exploit such weaknesses as unsafe user behavior and minimal security on many mobile devices to infiltrate corporate networks and steal information. ... “We’re witnessing phishing evolve in real time beyond email into a sophisticated multi-channel threat, with attackers leveraging trusted brands like USPS, Royal Mail, La Poste, Deutsche Post, and Australian Post to exploit limited mobile device security worldwide,” Kowski said. “The discovery of over 20 malicious PDFs and 630 phishing pages targeting organizations across 50+ countries shows how threat actors capitalize on users’ trust in official-looking communications on mobile devices.” He also noted that internal disagreements are hampering corporations’ ability to protect against such attacks.


Daily Tech Digest - January 27, 2025


Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous


Revolutionizing Investigations: The Impact of AI in Digital Forensics

One of the most significant challenges in modern digital forensics, both in the corporate sector and law enforcement, is the abundance of data. Due to increasing digital storage capacities, even mobile devices today can accumulate up to 1TB of information. ... Digital forensics started benefiting from AI features a few years ago. The first major development in this regard was the implementation of neural networks for picture recognition and categorization. This powerful tool has been instrumental for forensic examiners in law enforcement, enabling them to analyze pictures from CCTV and seized devices more efficiently. It significantly accelerated the identification of persons of interest and child abuse victims as well as the detection of case-related content, such as firearms or pornography. ... No matter how advanced, AI operates within the boundaries of its training, which can sometimes be incomplete or imperfect. Large language models, in particular, may produce inaccurate information if their training data lacks sufficient detail on a given topic. As a result, investigations involving AI technologies require human oversight. In DFIR, validating discovered evidence is standard practice. It is common to use multiple digital forensics tools to verify extracted data and manually check critical details in source files. 


Is banning ransomware payments key to fighting cybercrime?

Implementing a payment ban is not without challenges. In the short term, retaliatory attacks are a real possibility as cybercriminals attempt to undermine the policy. However, given the prevalence of targets worldwide, I believe most criminal gangs will simply focus their efforts elsewhere. The government’s resolve would certainly be tested if payment of a ransom were seen as the only way to avoid public health data being leaked, energy networks being crippled, or a CNI organization going out of business. In such cases, clear guidelines as well as technical and financial support mechanisms for affected organizations are essential. Policy makers must develop playbooks for such scenarios and run education campaigns that raise awareness about the policy’s goals, emphasizing the long-term benefits of standing firm against ransom demands. That said, increased resilience—both technological and organizational—is integral to any strategy. Enhanced cybersecurity measures are critical, in particular a zero trust strategy that reduces an organization’s attack surface and stops hackers from being able to move laterally in the network. The U.S. federal government has already committed to moving to zero trust architectures.


Building a Data-Driven Culture: Four Key Elements

Why is building a data-driven culture incredibly hard? Because it calls for a behavioral change across the organization. This work is neither easy nor quick. To better appreciate the scope of this challenge, let’s do a brief thought exercise. Take a moment to reflect on these questions: How involved are your leaders in championing and directly following through on data-driven initiatives? Do you know whether your internal stakeholders are all equipped and empowered to use data for all kinds of decisions, strategic or tactical? Does your work environment make it easy for people to come together, collaborate with data, and support one another when they’re making decisions based on the insights? Does everyone in the organization truly understand the benefits of using data, and are success stories regularly shared internally to inspire people to action? If your answers to these questions are “I’m not sure” or “maybe,” you’re not alone. Most leaders assume in good faith that their organizations are on the right path. But they struggle when asked for concrete examples or data-backed evidence to support these gut-feeling assumptions. The leaders’ dilemma becomes even more clear when you consider that the elements at the core of the four questions above — leadership intervention, data empowerment, collaboration, and value realization — are inherently qualitative. Most organizational metrics or operational KPIs don’t capture them today. 


How CIOs Should Prepare for Product-Led Paradigm Shift

Scaling product centricity in an organization is like walking a tightrope. Leaders must drive change while maintaining smooth operations. This requires forming cross-functional teams, adopting outcome-based evaluation, and navigating multiple operating models. As a CIO, balancing change while facing the internal resistance of a risk-averse, siloed business culture can feel like facing a strong wind on a high wire. ... The key to overcoming this is to demonstrate the benefits of a product-centric approach incrementally, proving its value until it becomes the norm. To prevent cultural resistance from derailing your vision for a more agile enterprise, leverage multiple IT operating models with a service or value orientation to meet the ambitious expectations of CEOs and boards. Engage the C-suite by taking a holistic view of how democratized IT can be used to meet stakeholder expectations. Every organization has a business and enterprise operating model to create and deliver value. A business model might focus on manufacturing products that delight customers, requiring the IT operating model to align with enterprise expectations. This alignment involves deciding whether IT will merely provide enabling services or actively partner in delivering external products and services.


CISOs gain greater influence in corporate boardrooms

"As the role of the CISO grows more complex and critical to organisations, CISOs must be able to balance security needs with business goals, culture, and articulate the value of security investments." She highlights the importance of strong relationships across departments and stakeholders in bolstering cybersecurity and privacy programmes. The study further discusses the positive impact of having board members with a cybersecurity background. These members foster stronger relationships with security teams and have more confidence in their organisation's security stance. For instance, boards with a CISO member report higher effectiveness in setting strategic cybersecurity goals and communicating progress, compared to boards without such expertise. CISOs with robust board relationships report improved collaboration with IT operations and engineering, allowing them to explore advanced technologies like generative AI for enhanced threat detection and response. However, gaps persist in priority alignment between CISOs and boards, particularly around emerging technologies, upskilling, and revenue growth. Expectations for CISOs to develop leadership skills add complexity to their role, with many recognising a gap in business acumen, emotional intelligence, and communication. 


Researchers claim Linux kernel tweak could reduce data center energy use by 30%

Researchers at the University of Waterloo's Cheriton School of Computer Science, led by Professor Martin Karsten and including Peter Cai, identified inefficiencies in network traffic processing for communications-heavy server applications. Their solution, which involves rearranging operations within the Linux networking stack, has shown improvements in both performance and energy efficiency. The modification, presented at an industry conference, increases throughput by up to 45 percent in certain situations without compromising tail latency. Professor Karsten likened the improvement to optimizing a manufacturing plant's pipeline, resulting in more efficient use of data center CPU caches. Professor Karsten collaborated with Joe Damato, a distinguished engineer at Fastly, to develop a non-intrusive kernel change consisting of just 30 lines of code. This small but impactful modification has the potential to reduce energy consumption in critical data center operations by as much as 30 percent. Central to this innovation is a feature called IRQ (interrupt request) suspension, which balances CPU power usage with efficient data processing. By reducing unnecessary CPU interruptions during high-traffic periods, the feature enhances network performance while maintaining low latency during quieter times.


GitHub Desktop Vulnerability Risks Credential Leaks via Malicious Remote URLs

While the credential helper is designed to return a message containing the credentials that are separated by the newline control character ("\n"), the research found that GitHub Desktop is susceptible to a case of carriage return ("\r") smuggling whereby injecting the character into a crafted URL can leak the credentials to an attacker-controlled host. "Using a maliciously crafted URL it's possible to cause the credential request coming from Git to be misinterpreted by Github Desktop such that it will send credentials for a different host than the host that Git is currently communicating with thereby allowing for secret exfiltration," GitHub said in an advisory. A similar weakness has also been identified in the Git Credential Manager NuGet package, allowing for credentials to be exposed to an unrelated host. ... "While both enterprise-related variables are not common, the CODESPACES environment variable is always set to true when running on GitHub Codespaces," Ry0taK said. "So, cloning a malicious repository on GitHub Codespaces using GitHub CLI will always leak the access token to the attacker's hosts." ... In response to the disclosures, the credential leakage stemming from carriage return smuggling has been treated by the Git project as a standalone vulnerability (CVE-2024-52006, CVSS score: 2.1) and addressed in version v2.48.1.
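The underlying bug class is easy to reproduce in miniature: when one component treats only "\n" as a field separator while another also honors "\r" as a line break, a value carrying a smuggled carriage return parses two different ways. The sketch below is a deliberately simplified illustration of that mismatch, not the actual GitHub Desktop or Git Credential Manager code.

```python
# Simplified illustration of carriage-return smuggling: two parsers disagree about
# what counts as a line break, so a smuggled "\r" changes which host is trusted.
# This is NOT the actual GitHub Desktop or Git Credential Manager code.

message = "protocol=https\nhost=github.com\rhost=attacker.example\nusername=alice\n"

def parse_newline_only(msg):
    """Split fields strictly on newline; a carriage return stays inside the value."""
    fields = {}
    for line in msg.split("\n"):
        if "=" in line:
            key, value = line.split("=", 1)
            fields[key] = value
    return fields

def parse_any_linebreak(msg):
    """Also treat carriage return as a line break (str.splitlines does this)."""
    fields = {}
    for line in msg.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            fields[key] = value          # the smuggled second 'host=' silently wins
    return fields

print(repr(parse_newline_only(message)["host"]))   # 'github.com\rhost=attacker.example'
print(repr(parse_any_linebreak(message)["host"]))  # 'attacker.example'
```

The general remediation is to reject or strictly validate control characters such as "\r" in such values before they cross a trust boundary.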


The No-Code Dream: How to Build Solutions Your Customer Truly Needs

What's excellent about no-code is that you can build a platform that won't require your customers to be development professionals — but will allow customization. That's the best approach: create a blank canvas for people, and they will take it from there. Whether it's surveys, invoices, employee records, or something completely different, developers have the tools to make it visually appealing to your customers, making it more intuitive for them. I also want to break the myth that no-code doesn't allow effective data management. It is possible to create a no-code platform that will empower users to perform complex mathematical operations seamlessly and to support managing interrelated data. This means users' applications will be more robust than their competitors' and produce more meaningful insights. ... As a developer, I am passionate about evolving tech and our industry's challenges. I am also highly aware of people's concerns over the security of many no-code solutions. Security is a critical component of any software; no-code solutions are no exception. One-off custom software builds do not typically undergo the same rigorous security testing as widely used commercial software due to the high cost and time involved. This leaves them vulnerable to security breaches.


Digital Operations at Turning Point as Security and Skills Concerns Mount

The development of appropriate skills and capabilities has emerged as a critical challenge, ranking as a pressing concern in advancing digital operations. The talent shortage is most acute in North America and the media industry, where fierce competition for skilled professionals coincides with accelerating digital transformation initiatives. Organizations face a dual challenge: upskilling existing staff while competing for scarce talent in an increasingly competitive market. The report suggests this skills gap could potentially slow the adoption of new technologies and hamper operational advancement if not adequately addressed. "The rapid evolution of how AI is being applied to many parts of jobs to be done is unmatched," Armandpour said. "Raising awareness, educating, and fostering a rich learning environment for all employees is essential." ... "Service outages today can have a much greater impact due to the interdependencies of modern IT architectures, so security is especially critical," Armandpour said. "Organizations need to recognize security as a critical business imperative that helps power operational resilience, customer trust, and competitive advantage." What sets successful organizations apart is the prioritization of defining robust security requirements upfront and incorporating security-by-design into product development cycles. 


Is ChatGPT making us stupid?

In fact, one big risk right now is how dependent developers are becoming on LLMs to do their thinking for them. I’ve argued that LLMs help senior developers more than junior developers, precisely because more experienced developers know when an LLM-driven coding assistant is getting things wrong. They use the LLM to speed up development without abdicating responsibility for that development. Junior developers can be more prone to trusting LLM output too much and don’t know when they’re being given good code or bad. Even for experienced engineers, however, there’s a risk of entrusting the LLM to do too much. For example, Mike Loukides of O’Reilly Media went through their learning platform data and found developers show “less interest in learning about programming languages,” perhaps because developers may be too “willing to let AI ‘learn’ the details of languages and libraries for them.” He continues, “If someone is using AI to avoid learning the hard concepts—like solving a problem by dividing it into smaller pieces (like quicksort)—they are shortchanging themselves.” Short-term thinking can yield long-term problems. As noted above, more experienced developers can use LLMs more effectively because of experience. If a developer offloads learning for quick-fix code completion at the long-term cost of understanding their code, that’s a gift that will keep on taking.

Daily Tech Digest - January 26, 2025


Quote for the day:

“If you don’t try at anything, you can’t fail… it takes backbone to lead the life you want” -- Richard Yates

Here’s Why Physical AI Is Rapidly Gaining Ground And Lauded As The Next Big AI Breakthrough

If we are going to connect generative AI to all kinds of robots and other machines that are wandering around in our homes, offices, factories, streets, and the like, we ought to expect that the AI will do so properly, safely, and with aplomb. Can an AI that only has text-based data training adequately control and direct those real-world machines as they mix among people? Some assert that this is a highly dangerous concern. The generative AI relies on what is essentially book learning to guess what will happen when a robot is instructed by the AI to lift a chair or hold a dog aloft. Is that good enough to cope with the myriad aspects that can go wrong? Perhaps the AI will, by text-based logic, assume that if the dog is dropped, it will bounce like a rubber ball. Ouch, the dog might not be amused. ... AI researchers are scurrying to craft Physical AI. The future depends on this capability. Machines and robots are going to be built and shipped to work side-by-side with humans. Physical AI will be the make-or-break of whether those mechanizations are compatible with humans and operate properly in the real world or instead are endangering and harmful.


Why workload repatriation must be part of true multi-cloud strategies

Repatriation can provide benefits such as cost optimization and enhanced control, but it also introduces significant challenges. Key obstacles organizations encounter during cloud repatriation include the absence of cloud-native services, limited access to provider-managed applications, the need for highly skilled professionals, and potentially substantial capital expenditures required for building or upgrading on-premises infrastructure. Migrating workloads back on-premises often results in the development of hybrid environments or, in cases where multiple public cloud providers are used, multi-cloud environments. This shift adds complexity to managing IT infrastructure, requiring greater coordination and expertise. In public cloud environments, providers offer a wide array of managed services, automated management, and orchestration capabilities that simplify operations and reduce the burden on IT teams. When repatriating workloads, organizations must find alternatives or develop in-house solutions to replicate these functionalities. This can be time-consuming, costly, and may result in reduced capabilities compared to cloud-native offerings. As such, organizations must carefully balance the trade-offs between the advanced capabilities of cloud-native solutions and the control offered by on-premises environments. 


3 hidden benefits of Dedicated Internet Access for enterprises

DIA is designed to support bandwidth-heavy tasks such as cloud-based applications and video conferencing. It ensures seamless connectivity, helping streamline operations and prevent performance issues. Routine activities like large file sharing, backups, and data transfers are completed more efficiently, while internal communication across multiple business locations becomes smoother and more reliable. Think of DIA as your business’s private Internet highway. Unlike shared connections, it provides uninterrupted service, essential for maintaining optimal workflows and boosting productivity. For companies that rely on consistent and high-performance Internet access, DIA offers a dependable solution tailored to meet these demands. ... Fast website loading times and smooth online transactions are essential for satisfying customers. DIA helps businesses deliver a premium online experience, which can significantly improve customer loyalty. This reliable performance extends to all business locations, including branch offices. With DIA, businesses can ensure consistent, high-quality interactions with their customers—whether accessing resources or reaching out through support channels. Additionally, DIA enhances customer support by ensuring messaging services remain continuously available, allowing businesses to respond quickly and efficiently to customer needs.


Data engineering - Pryon: Turning chaos into clarity

Data Engineering is the discipline that takes raw, unstructured data and transforms it into actionable, high-value insights. Without a strong data foundation, the 1 in 3 enterprises spending an average of $10M on AI projects next year alone are setting themselves up for failure. As data creation accelerates – 90% of the world’s data has been generated in the last two years – engineers are tasked with more than just managing it. They have to structure, organise and operationalise data so it can actually be useful and produce the right outputs. From building reliable pipelines to ensuring data quality, engineering teams play the central role in making systems that actually solve problems. ... Data synthesis is interesting, but taking action is paramount. The final step is putting it to work. Whether that means automating workflows, making real-time decisions, or delivering predictive insights, this is where the rubber meets the road. Agentic orchestration can enable systems to take the synthesised insights and act on them autonomously or with minimal human input. These engines bridge the gap between theory and practice, ensuring that your data doesn’t just sit idle – it drives measurable outcomes.


Leading with purpose: Insights from the Bhagavad Gita for modern managers

In a professional setting, the ability to manage emotions is crucial for success. A manager or individual who seeks gratification of ego and cannot regulate their emotions is likely to face challenges in achieving results. Actions driven by a sense of false ego can lead to conflicts, and misunderstandings, and ultimately hinder productivity. Such individuals may react impulsively rather than thoughtfully, allowing their emotions to cloud their judgment. When individuals learn to regulate their emotions and act from a place of calmness rather than chaos, they not only enhance their performance but also uplift those around them. A Sattvic approach to work fosters collaboration, creativity, and a shared sense of purpose. Conversely, when actions are driven by ego or excessive ambition (Tamsik), they often lead to stress and burnout. By embodying the teachings of the Gita—performing duties with dedication while remaining unattached to outcomes—individuals can achieve true mastery over their emotions. This mastery not only paves the way for personal success but also cultivates an environment where everyone can thrive together. While the entire Bhagavad Gita is replete with invaluable life lessons, these two shlokas stand out as particularly essential for effective management in the workplace. 


Accelerating HCM Cloud Implementation With RPA

Robotic Process Automation (RPA) provides a practical solution to streamline these processes. ... Many cloud platforms require Multi-Factor Authentication (MFA), which disrupts standard login routines for bots. However, we have addressed this by programmatically enabling RPA bots to handle MFA through integration with SMS or email-based OTP services. This allows seamless automation of login processes, even with additional security layers. ... It’s essential that users are assigned the correct authorizations in an HCM cloud, with ongoing maintenance of these permissions as individuals transition within the organization. Even with a well-defined scheme in place, it’s easy for someone to be shifted into a role that they shouldn’t hold. To address this challenge, we have leveraged RPA to automate the assignment of roles, ensuring adherence to least-privilege access models. ... Integrating with HCM systems through APIs often involves navigating rate limits that can disrupt workflows. To address this challenge, we implemented robust retry logic within our RPA bots, utilizing exponential backoff to gracefully handle API rate limit errors. This approach not only minimizes disruptions but also ensures that critical operations continue smoothly.
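A minimal sketch of that retry pattern follows; the endpoint, status-code handling, and limits are illustrative assumptions rather than the actual HCM cloud API or the bot code described above.

```python
# Retry with exponential backoff for API rate limits (HTTP 429).
# The endpoint and limits are illustrative assumptions, not a specific HCM API.
import random
import time

import requests

def call_with_backoff(url, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        response = requests.get(url, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor Retry-After if the server provides it; otherwise back off exponentially.
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * (2 ** attempt)
        time.sleep(delay + random.uniform(0, 0.5))   # jitter avoids synchronized retries
    raise RuntimeError(f"Still rate-limited after {max_retries} attempts: {url}")

# Hypothetical usage:
# employees = call_with_backoff("https://hcm.example.com/api/v1/employees")
```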


MDM and genAI: A match made in Heaven — or something less?

Despite its promising potential, AIoT faces several hurdles. One major challenge is interoperability. Many companies use IIoT devices and platforms from different manufacturers, which are not always seamlessly compatible. This complicates the implementation of integrated AIoT solutions and necessitates standardised interfaces and protocols. IIoT platforms such as Cumulocity can integrate various services and devices. A well-chosen platform facilitates the integration of new devices, enables easy scaling, and supports the flexible adaptation of an IIoT strategy. It also allows integration with other systems and technologies, such as ERP or CRM systems, thereby embedding IIoT technologies into existing business processes. Moreover, robust platforms offer specialised security features to protect connected devices from potential cybercriminal attacks. Another critical aspect is data preparation. In IoT environments, data quality is often poorer than businesses assume. Applying AI to inadequately prepared data produces subpar models that fail to deliver expected results. ... A further challenge is the skills shortage. Developing and implementing AIoT systems requires expertise in fields such as data analysis, machine learning, and cybersecurity. The demand for skilled professionals exceeds current supply, prompting companies to invest in training and development programmes.


Enterprise Architecture and Complexity

Complex architectures are characterised by attributes that make them challenging to manage using traditional project or program management methods. These architectures often have many layers, interconnected parts, variables, and dynamics that are not immediately apparent or easily understood. Complex architectures are also unpredictable (Theiss 2023) due to the communication and interaction required across and between the components. Managing an architecture build and deployment requires both broad and deep understanding of the interdependencies, interactions, and inherent constraints. As increasing levels of automation are deployed at scale, greater visibility and transparency are needed to understand not only the technologies and applications in play, but also the intended and unintended consequences and behaviour that they generate. Architectural artefacts and systems documentation (even if up to date) typically show elements such as nested operational processes as simple, generalised linkages and design patterns, which results in greater levels of ambiguity, not clarity. They only allow us to understand in part. As systems architectures become more complex in build, capability and scope, enhanced sense-making capabilities are needed to navigate components, to ensure a coherent, adaptive systems design.


Misinformation Is No. 1 Global Risk, Cyberespionage in Top 5

Misinformation campaigns in the form of deepfakes, synthetic voice recordings or fabricated news stories are now a leading mechanism for foreign entities to influence "voter intentions, sow doubt among the general public about what is happening in conflict zones, or tarnish the image of products or services from another country." This is especially acute in India, Germany, Brazil and the United States. Concern remains especially high following a year of the so-called "super elections," which saw heightened state-sponsored campaigns designed to manipulate public opinion.  ... Despite growing concerns, cyber resilience continues to be inadequate especially among small and mid-sized organizations, according to the report's findings. Thirty-five percent of small organizations believe their cyber resilience is inadequate, up from 5% in 2022. Many of these organizations lack the resources to invest in advanced cybersecurity measures, leaving them increasingly vulnerable to ransomware, phishing and other attacks. Seventy-one percent of cyber leaders say small organizations have already reached a "tipping point where they can no longer adequately secure themselves against the growing complexity of cyber risks." ... On one hand, AI-powered systems are proving invaluable in identifying threats, automating responses and analyzing vast amounts of data in real time.


Cloud repatriation – how to balance repatriation effectively and securely

Regardless of the reasons for making the move away from public cloud, the road to repatriation can be complex to navigate. Whether it is technical or talent issues, financial costs or compliance challenges, businesses making the switch should be prepared to spend time planning and executing an effective strategy. Within this strategy there are three areas that require special attention: observability, compliance and employing a holistic tech stack strategy. Observability is crucial in cloud repatriation because in order to move data and applications in-house, a business must understand them and how they are being used. It is only then you can ensure a smooth and effective transition. For example, there might be Shadow IT or AI that is being used by employees to get around IT policy and help them to get their work done faster. Sometimes these technologies will store data on a cloud service, so businesses need to be aware of them before making the switch. By leveraging observability, organizations can mitigate risks, optimize their infrastructure, and achieve successful repatriation that meets their strategic objectives. Compliance is also important as it is a major focus area for European and UK regulators with new and emerging regulations like DORA and NIS2 coming to the fore.


Daily Tech Digest - January 25, 2025


Quote for the day:

“You live longer once you realize that any time spent being unhappy is wasted.” -- Ruth E. Renkl


How to Prepare for Life After NB-IoT

Last November, the IoT world was caught off guard by AT&T’s announcement that it would discontinue support for Narrowband IoT (NB-IoT) by Q1 2025. For many, this came as a big surprise. NB-IoT was considered the prodigy technology, promising low-power, long-range, and low-cost connectivity. While NB-IoT never reached mass adoption in the US, the decision still came as a blow to those who did invest in the technology, and raised concerns about its viability among people outside the US. ... Fortunately, most IoT modules supporting NB-IoT also support LTE-M. Modules typically select the optimal network and technology based on signal quality and internal radio settings. Devices with roaming enabled will automatically switch networks or technologies if the primary connection fails. Once AT&T shuts down its network, your devices will automatically switch to another technology or network if set up correctly. However, rather than waiting for the network to become unavailable, you may want to stay in control of transitioning to another technology. This also allows you to test the process with a subset of devices before rolling out updates to your entire fleet. Assuming your cellular modules support LTE-M, and you have remote access to update configurations, you can update the radio access technology (RAT) using a simple AT command, as sketched below.
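As a rough sketch of what that looks like in practice, the snippet below uses pyserial to send a RAT-selection command over the module's serial interface. The command itself is vendor-specific: AT+URAT=7 selects LTE-M (Cat M1) on u-blox SARA-R4 modules, for example, while other vendors use different commands, so treat the port, baud rate, and command here as assumptions and check your module's AT command manual.

```python
# Sketch: switch a cellular module's radio access technology (RAT) to LTE-M.
# The AT command is vendor-specific (AT+URAT=7 applies to u-blox SARA-R4 modules);
# the serial port and baud rate below are assumptions for illustration.
import serial  # pyserial

def send_at_command(command, port="/dev/ttyUSB0", baudrate=115200):
    with serial.Serial(port, baudrate, timeout=5) as modem:
        modem.write((command + "\r\n").encode("ascii"))
        return modem.read(256).decode("ascii", errors="replace")

if __name__ == "__main__":
    # Expect an "OK" from the module if the command is accepted; depending on the
    # vendor, a reboot or an AT+CFUN cycle may be needed before the setting applies.
    print(send_at_command("AT+URAT=7"))
```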


Creating efficient, relevant, and lasting regulations requires several key factors. First and foremost, policymakers need a working definition of the object of their laws, which requires thorough work to capture the essence of what will be affected by their text. This is a challenging task in the case of AI because its definition remains in flux as the technology evolves. ... unprecedented surge in generative AI’s popularity created uncertainty for policymakers about how to navigate the new landscape. There was an urgent need for frameworks, definitions, and language to fully understand the impact of this technology and how to frame it. As the technology outpaced expectations, earlier regulatory efforts to address these tools quickly became inadequate and obsolete, leaving policymakers scrambling to catch up. This is precisely the situation Chinese regulators faced in their initial efforts to address the generative AI sector. The basic provisions outlined in the law were insufficient to address the profound societal impacts of generative AI’s widespread adoption. The attempt to establish China as an early player in AI regulation was overtaken by the pace of technological progress and private-sector innovation, rendering even the terminology obsolete.


How to Simplify Automated Security Testing in CI/CD Pipelines

Dependency management is where many teams stumble, and we’ve all seen the fallout from poorly managed libraries (hello, Log4Shell). Automating dependency checks isn’t optional; it’s a must. Tools like Dependabot, OWASP Dependency-Check, and Renovate take the grunt work out of monitoring for vulnerabilities, raising alerts, and even creating pull requests to fix issues. Imagine a Node.js team drowning in a sea of outdated packages. With Dependabot hooked into their GitHub workflow, every vulnerability gets an automatic pull request to update to a safe version. No manual labor, no guessing games—just a steady rhythm of secure, up-to-date code. Go deeper by leveraging Software Composition Analysis (SCA) tools that don’t just look at direct dependencies but dive into the murky waters of transitive dependencies too. ... Instead of vague warnings like “Potential SQL Injection Found,” imagine getting, “SQL Injection vulnerability detected at line 45 in user_controller.js. Here’s how to fix it…” Tools like CodeQL and Semgrep do precisely this. They integrate directly into CI pipelines, flag issues, suggest fixes, and provide links to further reading, all without overwhelming the dev team.


Security automation and integration can smooth AppSec friction

Whether you perceive friction between development and security testing to be an impediment or not often depends on your role in the organization. Of the AppSec team members who responded to the survey used for the “Global State of DevSecOps” report, 65% felt that testing impeded pipelines “moderately” or “severely.” While the report didn’t survey why they feel this way, we can speculate that it’s due to their proximity to the testing process, or potentially because they’re feeling pressure to accelerate review processes. Since they are closest to the task, they face the highest scrutiny for its efficiency. Of the development and engineering team members who replied to the survey, 58% share the sentiment of their AppSec counterparts. It is, however, important to consider that an additional 12% of the surveyed developers and engineers report that they just don’t have enough visibility into security testing to know what’s going on. Were they to have greater visibility into security testing processes, it is quite possible that they, too, would perceive AppSec testing as an impediment to pipelines. And this lack of visibility makes concerted DevSecOps initiatives more difficult to implement, since contributors are unable to close feedback loops or optimize development and testing efforts.


The Power of Many: Crowdsourcing as A Game-Changer for Modern Cyber Defense

Although shared expertise significantly boosts threat detection and hunting efficiency while simultaneously empowering cybersecurity education, there are several stumbling blocks to address on the way to building global crowdsourcing initiatives. While working towards a safer future, contributors to crowdsourced efforts often face issues related to intellectual property rights and the recognition of the significance of individual contributions within the professional network. Ensuring proper recognition for discoveries and contributions to global cyber defense at all levels, from the support of author attribution in the code of a detection rule to sharable digital credentials issued by organizations to recognize exceptional individual involvement and contributions to the crowdsourcing initiatives, is essential to maintaining motivation and fairness. Another challenge is adherence to privacy imperatives and compliance with security regulations, including the Traffic Light Protocol (TLP), while sharing information with a wide audience, since disclosure of sensitive information about vulnerabilities or cyber attacks can pose significant risks both to crowdsourcing program contributors and beneficiaries.


How to Harness the Power of Fear and Transform It Into a Leadership Strength

One of the most powerful ways to address fear is to reframe it as a perception rather than an absolute truth. Fear does not reflect objective threats; it is just one of the mind's faculties. By reframing it as a perception, a leader can make sound decisions instead of being driven by each instance of fear. Reframing does not stop fear; it changes how fear is processed. Leaders are able to go from impulsive to composed behavior by understanding that fear is a conceptual state rather than an actual one. Calming neurotransmitters like serotonin and endorphins take the place of stress chemicals like cortisol and adrenaline, promoting emotional equilibrium and resilience. For leaders, that shift can be radical. By approaching challenges with strength and rationality, not fear, they can spread the ripple effect into their companies. It's a way of creating an environment in which teams feel empowered, excited and pushed to grow and thrive. ... Recognizing fear as a perceived threat allows leaders to respond with reason and confidence. Mastering fear is a critical leadership skill, fostering innovation and collaboration. By transforming fear into a tool for growth, leaders unlock their full potential and inspire others, paving the way for sustained progress.


Nuclear-Powered Data Centers: When Will SMRs Finally Take Off?

Taking stock of the nuclear-powered data center market in 2024, Alan Howard, principal analyst of cloud and colocation services at Omdia, said: “It’s nothing short of exciting that Amazon, Google, and Microsoft have all signed deals for nuclear power… and Meta is publicly on the hunt.” Still, these deals are relatively small by the standards of the data center industry, and Howard cautioned against impatience, citing the mid-2030s as the earliest we can expect to see broad commercial availability of nuclear energy in powering data centers. “The reality is that these [nuclear reactors under construction] are essentially test reactors which is part of the long regulatory road nuclear technology companies must follow.” ... One of the chief challenges facing data center companies is the five-to-seven-year permitting and construction timelines for nuclear facilities, according to Ryan Mallory, COO at data center firm Flexential. “Data center companies must begin securing permits, ground space, and operational expertise to prepare for SMRs to become scalable and repeatable by the 2030s,” Mallory said. There are also technological challenges, according to Steven Carlini, chief data center and AI advocate at Schneider Electric. “Integrating SMRs into the existing ecosystem will be complex,” he said.


13 Cybersecurity Predictions for 2025

AI capabilities are awesome, yet I’m finding that most of the AI capabilities being developed are focused on just getting them to work and into the marketplace as soon as possible. We need to do a much better job of incorporating cybersecurity best practices and secure-by-design principles into the creation, operation, and sustainment of AI systems. The AI Security and Incident Response Team (AISIRT) here at the Software Engineering Institute has discovered numerous material weaknesses and flaws in AI capabilities resulting in vulnerabilities that can be leveraged by hostile entities. AI vulnerabilities are cyber vulnerabilities, and the list of reported vulnerabilities continues to grow. Software engineers are trained to incorporate secure-by-design principles into their work. But neural-network models, including generative AI and LLMs, bring along a wide range of additional kinds of weaknesses and vulnerabilities, and for many of these it is a struggle to develop effective remediations. Until the AI community is able to develop AI-appropriate secure-by-design best practices to augment the secure-by-design practices already familiar to software engineers, I believe we’ll see preventable cyber incidents affecting AI capabilities in 2025. ... Ransomware criminal activity continues to feast on the cyber poor. Cyber criminals have been feasting on those who operate below the cyber poverty line.


Biometrics Institute identifies dire need for clear language in biometrics and AI

Most biometrics experts agree that no one is exactly sure what anyone is talking about. The Biometrics Institute is trying to help, via its Explanatory Dictionary, a resource that aims to capture the nuances in biometric terminology, “considering both formal definitions and how they are perceived by the public – for example, how someone might explain biometrics or AI to a friend.” Because, as of now, there isn’t a standard that is universally agreed-on, nor is there really a clear way to explain biometrics and AI to your neighbour Ted who works in marketing. “There are no universal definitions of biometrics or AI and those put forward by ISO and some governments are either too technical, obtuse or are not fully aligned with one another or are hidden behind paywalls and not accessible to the majority of the general public.” The paper drills down on the semantics of biometric grammar. What does it mean for a biometric application to “have AI”? Conflation of certain terms in both regulatory and public contexts exacerbates the problem. Media struggles to pick apart the web of language, and contributes its own strands in the process. Is a tool “AI-driven,” or “AI-equipped”? Where do algorithms fit in?


How AI Copilots Are Transforming Threat Detection and Response

The rise of AI copilots in cybersecurity is a transformative moment, but it requires a shift in mindset. Security teams should view these tools as partners, not replacements. AI copilots excel at processing vast datasets and identifying patterns, but humans are irreplaceable when it comes to judgment and understanding context. The future of cybersecurity lies in this hybrid approach, where AI enhances human capabilities rather than attempting to replicate them. Business leaders should focus on fostering this collaboration, equipping their teams with the skills and tools needed to work effectively with AI. Additionally, transparency is non-negotiable. Teams must understand how their AI copilots make decisions, ensuring accountability and reducing the risk of errors. This also involves rigorous testing and ongoing monitoring to detect and mitigate biases or vulnerabilities before they can be exploited. ... By empowering security teams with advanced capabilities, businesses can stay ahead of adversaries and secure a resilient future. Looking ahead, AI copilots are just the beginning. As these tools become more advanced, they will evolve beyond copilots into more autonomous AI agents—a shift often referred to as agentic AI. 


Daily Tech Digest - January 24, 2025


Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis


What comes after Design thinking

The first and most obvious one is that we can no longer afford to design things solely for humans. We clearly need to think in non-human, non-monocentric terms if we want to achieve real, positive, long-term impact. Second, HCD fell short in making its practitioners think in systems and leverage the power of relationships to really be able to understand and redesign what has not been serving us or our planet. Lastly, while HCD accomplished great feats in designing better products and services that solve today’s challenges, it fell short in broadening horizons so that these products and systems could pave the way for regenerative systems: the ones that go beyond sustainability and actively restore and revitalize ecosystems, communities, and resources, creating lasting, positive impact. Now, everything that we put out in the world needs to have an answer to how it is contributing to a regenerative future. And in order to build a regenerative future, we need to start prioritizing something that is integral to nature: relationships. We need to grow relational capacity, from designing for better interpersonal relationships to establishing systems that facilitate cross-organizational collaboration. We need to think about relational networks and harness their power to recreate more just, trustful, and better functioning systems. We need to think in communities.


FinOps automation: Raising the bar on lowering cloud costs

Successful FinOps automation requires strategies that exploit efficiencies from every angle of cloud optimization. Good data management, negotiations, data manipulation capabilities, and cloud cost distribution strategies are critical to automating cost-effective solutions that minimize cloud spend. This article looks at how expert FinOps leaders have focused their automation efforts to achieve the greatest benefits. ... Effective automation relies on well-structured data. Intuit and Roku have demonstrated the importance of robust data management strategies, focusing on AWS accounts and Kubernetes cost allocation. Good data engineering enables transparency, visibility, and accurate budgeting and forecasting. ... Automation efforts should focus on areas with the highest potential for cost savings, such as prepayment optimization and waste reduction. Intuit and Roku have achieved significant savings by targeting these high-cost areas. ... Automation tools should be accessible and user-friendly for engineers managing cloud resources. Intuit and Roku have developed tools that simplify resource management and align costs with responsible teams. Automated reporting and forecasting tools help engineers make informed decisions.
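
To make the Kubernetes cost-allocation idea concrete, here is a minimal sketch that apportions a cluster's hourly cost to teams in proportion to requested CPU. All figures, namespaces, and team labels are hypothetical; a real pipeline would pull usage from the provider's billing export or a metrics API rather than a hard-coded list.

```python
# Illustrative sketch: split a cluster's hourly cost across teams by their
# share of requested CPU. Numbers and labels below are made up.
from collections import defaultdict

CLUSTER_HOURLY_COST = 12.40  # assumed blended node cost for the whole cluster

# Assumed inventory: (namespace, team label, requested CPU cores)
pod_requests = [
    ("checkout", "payments", 4.0),
    ("search-api", "discovery", 6.0),
    ("batch-etl", "data-platform", 10.0),
]

def allocate_cost(pods, cluster_cost):
    """Split cluster_cost across teams proportionally to requested CPU."""
    total_cpu = sum(cpu for _, _, cpu in pods)
    if total_cpu == 0:
        return {}
    per_team = defaultdict(float)
    for _, team, cpu in pods:
        per_team[team] += cluster_cost * (cpu / total_cpu)
    return dict(per_team)

if __name__ == "__main__":
    for team, cost in sorted(allocate_cost(pod_requests, CLUSTER_HOURLY_COST).items()):
        print(f"{team:15s} ${cost:6.2f}/hour")
```

The same proportional split can be run per dimension (memory, GPU, storage) and fed into the automated reporting the article describes, so each team sees the spend it is responsible for.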


Why CISOs Must Think Clearly Amid Regulatory Chaos

At their core, CISOs are truth sayers — akin to an internal audit committee that assesses risks and makes recommendations to improve an organization's defenses and internal controls. Ultimately, though, it's the board and a company's top executives who set policy and decide what to disclose in public filings. CISOs can and should be counselors for this group effort because they have the understanding of security risk. And yet, the advice they can offer is limited if they don't have full visibility into an organization's technology stack. Many oversee a company's IT system, but not the products the company sells. That's crucial when it comes to data-dependent systems and devices that can provide network-access targets to cyber criminals. Those might include medical devices, or sensors and other Internet of Things endpoints used in manufacturing lines, electric grids, and other critical physical infrastructure. In short: A company's defenses are only as strong as the board and its top executives allow them to be. And if there is a breach, as in the case of SolarWinds? CISOs do not determine the materiality of a cybersecurity incident; a company's top executives and its board make that call. The CISO's responsibilities in that scenario involve responding to the incident and conducting the follow-up forensics required to help minimize or avoid future incidents.


Building Secure Multi-Cloud Architectures: A Framework for Modern Enterprise Applications

Technical controls alone cannot secure multi-cloud environments. Organizations must conduct cloud security architecture reviews before implementing any multi-cloud solution. These reviews should focus on data flow patterns between clouds, authentication and authorization requirements, and compliance obligations across all relevant jurisdictions. Completing these tasks thoroughly and diligently will ensure that multi-cloud security is baked into the architectural layer between the clouds and in the clouds themselves. While thorough architecture reviews establish the foundation, automation brings these security principles to life at scale. Automation provides a major advantage to security operations for multi-cloud environments. By treating infrastructure and security as code, organizations can achieve consistent configurations across clouds, implement automated security testing and enable fast response to security events. This improves overall security and reduces operational overhead because it allows us to do more with less and to reduce human error. Our security operations experienced a substantial enhancement when we moved to automated compliance checks. Still, we did not just throw AWS services at the problem. We engaged our security team deeply in the process.
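
To illustrate what an automated compliance check can look like in practice, here is a minimal sketch for one provider: flagging AWS S3 buckets that lack a default server-side-encryption configuration. It assumes boto3 and valid AWS credentials; in a multi-cloud setup an equivalent check would exist per provider, all run from the same pipeline so findings are reported consistently.

```python
# Minimal compliance-as-code sketch for one cloud: list S3 buckets without a
# default server-side-encryption configuration. Assumes boto3 + AWS credentials.
import boto3
from botocore.exceptions import ClientError

def unencrypted_buckets():
    s3 = boto3.client("s3")
    findings = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                findings.append(name)  # no default encryption configured
            else:
                raise
    return findings

if __name__ == "__main__":
    for name in unencrypted_buckets():
        print(f"NON-COMPLIANT: s3://{name} has no default encryption")
```

Run on a schedule (or on every infrastructure change), checks like this keep configurations consistent across clouds without relying on manual review.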


Scaling Dynamic Application Security Testing (DAST)

One solution is to monitor requests sent to the target web server and extrapolate an OpenAPI Specification based on those requests in real time. This monitoring could be performed client-side, server-side, or in between on an API gateway, load balancer, etc. This is a scalable, automatable solution that does not require each developer’s involvement. Depending on how long it runs, this approach can be limited in comprehensively identifying all web endpoints. For example, if no users called the /logout endpoint, then the /logout endpoint would not be included in the automatically generated OpenAPI Specification. Another solution is to statically analyze the source code for a web service and generate an OpenAPI Specification based on defined API endpoint routes that the automation can glean from the source code. Microsoft internally prototyped this solution and found it to be non-trivial to reliably discover all API endpoint routes and all parameters by parsing abstract syntax trees without access to a working build environment. This solution was also unable to handle scenarios of dynamically registered API route endpoint handlers. ... To truly scale DAST for thousands of web services, we need to automatically, comprehensively, and deterministically generate OpenAPI Specifications.
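
As a rough illustration of the traffic-monitoring approach, the sketch below records observed (method, path) pairs — for example, parsed from a gateway access log — and emits a skeleton OpenAPI 3.0 document a DAST scanner could consume. Parameter inference, request bodies, and authentication are deliberately omitted, and, as noted above, an endpoint no one calls (such as /logout) will simply be missing from the output.

```python
# Sketch: turn observed HTTP traffic into a skeleton OpenAPI 3.0 document.
import json
from collections import defaultdict

class TrafficToOpenAPI:
    def __init__(self, title="observed-api"):
        self.title = title
        self.paths = defaultdict(set)  # path -> set of HTTP methods seen

    def record(self, method: str, path: str) -> None:
        """Register one observed request."""
        self.paths[path].add(method.lower())

    def to_spec(self) -> dict:
        """Emit a minimal OpenAPI document covering everything observed."""
        return {
            "openapi": "3.0.3",
            "info": {"title": self.title, "version": "generated"},
            "paths": {
                path: {m: {"responses": {"default": {"description": "observed"}}}
                       for m in sorted(methods)}
                for path, methods in sorted(self.paths.items())
            },
        }

if __name__ == "__main__":
    spec = TrafficToOpenAPI()
    for line in ["GET /users", "POST /users", "GET /users/42"]:
        method, path = line.split(" ", 1)
        spec.record(method, path)
    print(json.dumps(spec.to_spec(), indent=2))
```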


Post-Quantum Cryptography 2025: The Enterprise Readiness Gap

"Quantum technology offers a revolutionary approach to cybersecurity, providing businesses with advanced tools to counter emerging threats," said David Close, chief solutions architect at Futurex. By using quantum machine learning algorithms, organizations can detect threats faster and more accurately. These algorithms identify subtle patterns that indicate multi-vector cyberattacks, enabling proactive responses to potential breaches. Innovations such as quantum key distribution and quantum random number generators enable unbreakable encryption and real-time anomaly detection, making them indispensable in fraud prevention and secure communications, Close said. These technologies not only protect sensitive data but also ensure the integrity of financial transactions and authentication protocols. A cornerstone of quantum security is post-quantum cryptography, PQC. Unlike traditional cryptographic methods, PQC algorithms are designed to withstand attacks from quantum computers. Standards recently established by the National Institute of Standards and Technology include algorithms such as Kyber, Dilithium and SPHINCS+, which promise robust protection against future quantum threats.


Tricking the bad guys: realism and robustness are crucial to deception operations

The goal of deception technology, also known as deception techniques, operations, or tools, is to create an environment that attracts and deceives adversaries to divert them from targeting the organization’s crown jewels. Rapid7 defines deception technology as “a category of incident detection and response technology that helps security teams detect, analyze, and defend against advanced threats by enticing attackers to interact with false IT assets deployed within your network.” Most cybersecurity professionals are familiar with the most common current application of deception technology, honeypots, which are computer systems sacrificed to attract malicious actors. But experts say honeypots are merely decoys deployed as part of what should be broader efforts to invite shrewd and easily angered adversaries to buy into elaborate deceptions. Companies selling honeypots “may not be thinking about what it takes to develop, enact, and roll out an actual deception operation,” Handorf said. “As I stressed, you have to know your infrastructure. You have to have a handle on your inventory, the log analysis in your case. But you also have to think that a deception operation is not a honeypot. It is more than a honeypot. It is a strategy that you have to think about and implement very decisively and with willful intent.”
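
To make the decoy idea concrete, here is a minimal sketch of a fake service listener: it accepts connections on an unused port, logs every touch, and returns a fake banner. The port and banner are illustrative only; a real deception operation layers believable content, telemetry pipelines, and an inventory-aware deployment plan on top of anything this simple.

```python
# Minimal decoy listener: any connection to this port is suspicious by design,
# so every contact is logged as a high-signal event.
import logging
import socket

DECOY_PORT = 2222  # hypothetical: pose as an "admin SSH" service

logging.basicConfig(filename="decoy.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def run_decoy(port: int = DECOY_PORT) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        while True:
            conn, (addr, src_port) = srv.accept()
            with conn:
                logging.info("decoy contact from %s:%d", addr, src_port)
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # fake banner

if __name__ == "__main__":
    run_decoy()
```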


Effective Techniques to Refocus on Security Posture

If you work in software development, then “technical debt” is a term that likely triggers strong reactions. Foundationally, technical debt serves a similar function to financial debt. When well-managed, both can be used as leverage for further growth opportunities. In the context of engineering, technical debt can help expand product offerings and operations, helping a business grow faster than the cost of servicing the debt, thanks to the opportunities the leverage creates. On the other hand, debt also comes with risks, and the rate of exposure is variable, dependent on circumstance. In the context of security, acceptance of technical debt from End of Life (EoL) software and risky decisions enable threats whose greatest advantage is time, the exact resource that debt leverages. ... The trustworthiness of software depends on its exploitable attack surface. Exploitable vulnerabilities are part of that attack surface. If the outcome of the SBOM with a VEX attestation is a deeper understanding of those applicable and exploitable vulnerabilities, coupling that information with exploit-prediction analysis such as EPSS helps bring valuable information to decision-making. This type of assessment allows for programmatic decision-making. It allows software suppliers to express risk in the context of their applications and empowers software consumers to escalate on problems worth solving.
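
As a rough sketch of that coupling, the snippet below keeps only the CVEs a (heavily simplified, stand-in) VEX document marks as "affected" and ranks them by EPSS exploitation probability pulled from FIRST.org's public API. The VEX structure here is illustrative, not a real VEX schema.

```python
# Sketch: filter vulnerabilities by VEX status, then rank by EPSS probability.
import requests

# Hypothetical, simplified VEX output: CVE -> product status
vex_records = {
    "CVE-2021-44228": "affected",
    "CVE-2023-12345": "not_affected",  # e.g., vulnerable code not present
}

def epss_score(cve_id: str) -> float:
    """Look up the EPSS probability for one CVE via FIRST.org's public API."""
    resp = requests.get("https://api.first.org/data/v1/epss",
                        params={"cve": cve_id}, timeout=10)
    resp.raise_for_status()
    data = resp.json().get("data", [])
    return float(data[0]["epss"]) if data else 0.0

def prioritized(records: dict[str, str]) -> list[tuple[str, float]]:
    """Keep only CVEs VEX marks as 'affected', ranked by EPSS probability."""
    hits = [(cve, epss_score(cve)) for cve, status in records.items()
            if status == "affected"]
    return sorted(hits, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for cve, score in prioritized(vex_records):
        print(f"{cve}: EPSS {score:.3f} -> remediate")
```

The point is the programmatic decision-making the article describes: suppliers publish what is actually exploitable in their product, and consumers spend remediation effort where exploitation is most likely.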


Sustainability, grid demands, AI workloads will challenge data center growth in 2025

Uptime expects new and expanded data center developers will be asked to provide or store power to support grids. That means data centers will need to actively collaborate with utilities to manage grid demand and stability, potentially shedding load or using local power sources during peak times. Uptime forecasts that data center operators “running non-latency-sensitive workloads, such as specific AI training tasks, could be financially incentivized or mandated to reduce power use when required.” “The context for all of this is that the [power] grid, even if there were no data centers, would have a problem meeting demand over time. They’re having to invest at a rate that is historically off the charts. It’s not just data centers. It’s electric vehicles. It’s air conditioning. It’s decarbonization. But obviously, they are also retiring coal plants and replacing them with renewable plants,” Uptime’s Lawrence explained. “These are much less stable, more intermittent. So, the grid has particular challenges.” ... According to Uptime, infrastructure requirements for next-generation AI will force operators to explore new power architectures, which will drive innovations in data center power delivery. As data centers need to handle much higher power densities, it will throw facilities off balance in terms of how the electrical infrastructure is designed and laid out.


Is the Industrial Metaverse Transforming the E&U Industry?

One major benefit of the industrial metaverse is that it can monitor equipment issues and hazardous conditions in real time so that any fluctuations in the electrical grid are instantly detected. As they collect data and create simulations, digital twins can also function as proactive tools by predicting potential problems before they escalate. “You can see which components are in early stages of failure,” a Hitachi Energy spokesperson notes in this article. “You can see what the impact of failure is and what the time to failure is, so you’re able to make operational decisions, whether it’s a switching operation, deploying a crew, or scheduling an outage, whatever that looks like.” ... Digital twins also make it possible for operators to simulate and test operational changes in virtual environments before real-world implementation, reducing excessive costs. “While it will not totally replace on-site testing, it can significantly reduce physical testing, lower costs and contribute to an increased quality of the protection system,” Andrea Bonetti, a power system protection specialist at Megger, tells the Switzerland-based International Electrotechnical Commission. Shell is one of several energy providers that use digital twins to enhance operations, according to Digital Twin Insider.