Daily Tech Digest - January 08, 2025

GenAI Won’t Work Until You Nail These 4 Fundamentals

Too often, organizations leap into GenAI fueled by excitement rather than strategic intent. The urgency to appear innovative or keep up with competitors drives rushed implementations without distinct goals. They see GenAI as the “shiny new [toy],” as Kevin Collins, CEO of Charli AI, aptly puts it, but the reality check comes hard and fast: “Getting to that shiny new toy is expensive and complicated.” This rush is reflected in over 30,000 mentions of AI on earnings calls in 2023 alone, signaling widespread enthusiasm but often without the necessary clarity of purpose. ... The shortage of strategic clarity isn’t the only roadblock. Even when organizations manage to identify a business case, they often find themselves hamstrung by another pervasive issue: their data. Messy data hampers organizations’ ability to mature beyond entry-level use cases. Data silos, inconsistent formats and incomplete records create bottlenecks that prevent GenAI from delivering its promised value. ... Weak or nonexistent governance structures expose companies to various ethical, legal and operational risks that can derail their GenAI ambitions. According to data from an Info-Tech Research Group survey, only 33% of GenAI adopters have implemented clear usage policies. 


Inside the AI Data Cycle: Understanding Storage Strategies for Optimised Performance

The AI Data Cycle is a six-stage framework, beginning with the gathering and storing of raw data. In this initial phase, data is collected from multiple sources, with a focus on assessing its quality and diversity, which establishes a strong foundation for the stages that follow. For this phase, high-capacity enterprise hard disk drives (eHDDs) are recommended, as they provide high storage capacity and cost-effectiveness per drive. In the next stage, data is prepared for ingestion, and this is where insight from the initial data collection phase is processed, cleaned and transformed for model training. To support this phase, data centers are upgrading their storage infrastructure – such as implementing fast data lakes – to streamline data preparation and intake. At this point, high-capacity SSDs play a critical role, either augmenting existing HDD storage or enabling the creation of all-flash storage systems for faster, more efficient data handling. Next is the model training phase, where AI algorithms learn to make accurate predictions using the prepared training data. This stage is executed on high-performance supercomputers, which require specialised, high-performing storage to function optimally. 


Buy or Build: Commercial Versus DIY Network Automation

DIY automation can be tailored to your specific network and, in some cases, can meet security or compliance requirements more easily than vendor products. And it comes at a great price: free! The cost of a commercial tool is sometimes higher than the value it creates, especially if you have unusual use cases. But DIY tools take time to build and support. Over 50% of organizations in EMA’s survey spend 6-20 hours per week debugging and supporting homegrown tools. Cultural preferences also come into play. While engineers love to grumble about vendors and their products, that doesn’t mean they prefer DIY. In my experience, NetOps teams are often set in their ways, preferring manual processes that do not scale up to match the complexity of modern networks. Many network engineers do not have the coding skills to build good automation, and most don't think broadly about how to tackle problems with automation. The first and most obvious fix for the issues holding back automation is simply for automation tools to get better. They must have broad integrations and be vendor neutral. Deep network mapping capabilities help resolve the issue of legacy networks and reduce the use cases that require DIY. Low- and no-code tools help ease budget, staffing, and skills issues.


How HR can lead the way in embracing AI as a catalyst for growth

Common workplace concerns include job displacement, redundancy, bias in AI decision-making, output accuracy, and the handling of sensitive data. Tracy notes that these are legitimate worries that HR must address proactively. “Clear policies are essential. These should outline how AI tools can be used, especially with sensitive data, and safeguards must be in place to protect proprietary information,” she explains. At New Relic, open communication about AI integration has built trust. AI is viewed as a tool to eliminate repetitive tasks, freeing time for employees to focus on strategic initiatives. For instance, their internally developed AI tools support content drafting and research, enabling leaders like Tracy to prioritize high-value activities, such as driving organizational strategy. “By integrating AI thoughtfully and transparently, we’ve created an environment where it’s seen as a partner, not a threat,” Tracy says. This approach fosters trust and positions AI as an ally in smarter, more secure work practices. The key is to highlight how AI can help everyone excel in their roles and elevate the work they do every day. “While it’s realistic to acknowledge that some aspects of our jobs—or even certain roles—may evolve with AI, the focus should be on how we integrate it into our workflow and use it to amplify our impact and efficiency,” notes Tracy.


Cloud providers are running out of ‘next big things’

Yes, every cloud provider is now “an AI company,” but let’s be honest — they’re primarily engineering someone else’s innovations into cloud-consumable services. GPT-4 through Microsoft Azure? That’s OpenAI’s innovation. Vector databases? They came from the open source community. Cloud providers are becoming AI implementation platforms rather than AI innovators. ... The root causes of the slowdown in innovation are clear. First, market maturity: the foundational issues in cloud computing have mostly been resolved, and what’s left are increasingly specialized niche cases. Second, AWS, Azure, and Google Cloud are no longer the disruptors — they’re the defenders of market share. Their focus has shifted from innovation to optimization and retention. A defender’s mindset manifests itself in product strategies. Rather than introducing revolutionary new services, cloud providers are fine-tuning existing offerings. They’re also expanding geographically, with the hyperscalers expected to announce 30 new regions in 2025. However, these expansions are driven more by data sovereignty requirements than innovative new capabilities. This innovation slowdown has profound implications for enterprises. Many organizations bet their digital transformation on cloud-native architectures with continuous innovation.


Historical Warfare’s Parallels with Cyber Warfare

In 1942, the British considered Singapore nearly impregnable. They fortified its coast heavily, believing any attack would come from the sea. Instead, the Japanese stunned the defenders by advancing overland through dense jungle terrain the British deemed impassable. This unorthodox approach, using bicycles in great numbers and small tracks through the jungle, enabled the Japanese forces to hit the defences at their weakest point, well ahead of the projected timetable, catching the British off guard. In cybersecurity, this corresponds to zero-day vulnerabilities and unconventional attack vectors. Hackers exploit flaws that defenders never saw coming, turning supposedly secure systems into easy marks. The key lesson is never to grow complacent, because you never know what you will be hit with, or when. ... Cyber attackers also use psychology against their targets. Phishing emails appeal to curiosity, trust, greed, or fear, luring victims into clicking malicious links or revealing passwords. Social engineering exploits human nature rather than code, and defenders must recognise that people, not just machines, are the frontline. Regular training, clear policies, and an ingrained culture of healthy scepticism (already present in most IT staff) can thwart even the most artful psychological ploys.


Insider Threat: Tackling the Complex Challenges of the Enemy Within

Third-party background checking can only go so far. It must be supported by old-fashioned, experienced interview techniques. Omri Weinberg, co-founder and CRO at DoControl, explains his methodology: “We’re primarily concerned with two types of bad actors. First, there are those looking to use the company’s data for nefarious purposes. These individuals typically have the skills to do the job and then some – they’re often overqualified. They pose a severe threat because they can potentially access and exploit sensitive data or systems.” The second type includes those who oversell their skills and are in fact underqualified, sometimes severely so. “While they might not have malicious intent, they can still cause significant damage through incompetence or by introducing vulnerabilities due to their lack of expertise. For the overqualified potential bad actors, we’re wary of candidates whose skills far exceed the role’s requirements without a clear explanation. For the underqualified group, we look for discrepancies between claimed skills and actual experience or knowledge during interviews.” This means it is important to probe candidates during the interview to gauge their true skill level. “It’s essential that the person evaluating the hire has the technical expertise to make these determinations,” he added.


Raise your data center automation game with easy ecosystem integration

If integrations are the key, then the things you look for to understand whether a product is flashy or meaningful should change. The UI matters, but the way tools are integrated is the truly telling characteristic. What APIs exist? How is data normalized? Are interfaces versioned and maintained across different releases? Can you create complex dashboards that pull things together from different sources using no-code models that don't require source access to contextualize your environment? How are workflows strung together into more complex operations? By changing your focus, you can start to evaluate these platforms based on how well they integrate rather than on how snazzy the time series database interface is. Of course, things like look and feel matter, but anyone who wants to scale their operations will realize that the UI might not even be the dominant consumption model over time. Is your team looking to click their way through to completion? ... Wherever you are in this discovery process, let me offer some simple advice: Expand your purview from the network to the ecosystem and evaluate your options in the context of that ecosystem. When you do that effectively, you should know which solutions are attractive but incremental and which are likely to create more durable value for you and your organization.


Why Scrum Masters Should Grow Their Agile Coaching Skills

More than half of the organizations surveyed report that finding scrum masters with the right combination of skills to meet their evolving demands is very challenging. Notably, 93% of companies seek candidates with strong coaching skills but state that it’s one of the skills hardest to find. Building strong coaching and facilitation skills can help you stand out in the job market and open doors to new career opportunities. As scrum masters are expected to take on increasingly strategic roles, your skills become even more valuable. Senior scrum masters, in particular, are called upon to handle politically sensitive and technically complex situations, bridging gaps between development teams and upper management. Coaching and facilitation skills are requested nearly three times more often for senior scrum master roles than for other positions. Growing these coaching competencies can give you an edge and help you make a bigger impact in your career. ... Who wouldn’t want to move up in their career into roles with greater responsibilities and bigger impact? Regardless of the area of the company you’re in—product, sales, marketing, IT, operations—you’ll need leadership skills to guide people and enable change within the organization. 


Scaling penetration testing through smart automation

Automation undoubtedly has tremendous potential to streamline the penetration testing lifecycle for MSSPs. The most promising areas are the repetitive, data-intensive, and time-consuming aspects of the process. For instance, automated tools can cross-reference vulnerabilities against known exploit databases like CVE, significantly reducing manual research time. They can enhance accuracy by minimizing human error in tasks like calculating CVSS scores. Automation can also drastically reduce the time required to compile, format, and standardize pen-testing reports, which can otherwise take hours or even days depending on the scope of the project. For MSSPs handling multiple client engagements, this could translate into faster project delivery cycles and improved operational efficiency. For their clients – it enables near real-time responses to vulnerabilities, reducing the window of exposure and bolstering their overall security posture. However – and this is crucial – automation should not be treated as a silver bullet. Human expertise remains absolutely indispensable in the testing itself. The human ability to think creatively, to understand complex system interactions, to develop unique attack scenarios that an algorithm might miss—these are irreplaceable. 
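One of the repetitive tasks named above, calculating CVSS scores, is mechanical enough to automate directly. The sketch below computes a CVSS v3.1 base score for the common scope-unchanged case, using the metric weights and the rounding rule from the published specification; a production tool would also handle changed-scope vectors and temporal/environmental metrics.

```python
import math

# Metric weights from the CVSS v3.1 specification (scope: unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC = {"L": 0.77, "H": 0.44}                          # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required
UI = {"N": 0.85, "R": 0.62}                          # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(value: float) -> float:
    """Spec-mandated rounding: smallest 1-decimal number >= value."""
    scaled = round(value * 100000)
    if scaled % 10000 == 0:
        return scaled / 100000.0
    return (math.floor(scaled / 10000) + 1) / 10.0

def base_score(vector: str) -> float:
    """Base score for a scope-unchanged v3.1 vector like 'AV:N/AC:L/...'."""
    m = dict(part.split(":") for part in vector.split("/"))
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

print(base_score("AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H"))  # 9.8 (critical)
```

Automating this arithmetic removes exactly the class of human error the excerpt describes, while leaving severity interpretation to the analyst.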



Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson

Daily Tech Digest - January 07, 2025

With o3 having reached AGI, OpenAI turns its sights toward superintelligence

One of the challenges of achieving AGI is defining it. As of yet, researchers and the broader industry do not have a concrete description of what it will be and what it will be able to do. The general consensus, though, is that AGI will possess human-level intelligence, be autonomous, have self-understanding, and will be able to “reason” and perform tasks that it was not trained to do. ... Going beyond AGI, “superintelligence” is generally understood to be AI systems that far surpass human intelligence. “With superintelligence, we can do anything else,” Altman wrote. “Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.” He added, “this sounds like science fiction right now, and somewhat crazy to even talk about it.” However, “we’re pretty confident that in the next few years, everyone will see what we see,” he said, emphasizing the need to act “with great care” while still maximizing benefit. ... OpenAI set out to build AGI from its founding in 2015, when the concept of AGI, as Altman put it to Bloomberg, was “nonmainstream.” “We wanted to figure out how to build it and make it broadly beneficial,” he wrote in his blog post. 


Bridging the execution gap – why AI is the new frontier for corporate strategy

Imagine a future where leadership teams are not constrained by outdated processes but empowered by intelligent systems. In this world, CEOs use AI to visualise their entire organisation’s alignment, ensuring every department contributes to strategic goals. Middle managers leverage real-time insights to adapt plans dynamically, while employees understand how their work drives the company’s mission forward. Such an environment fosters resilience, innovation, and engagement. By turning strategy into a living, breathing entity, organisations can adapt to challenges and seize opportunities faster than ever before. The road to this future is not without challenges. Leaders must embrace cultural change, invest in the right technologies, and commit to continuous learning. But the rewards – a thriving, agile organisation capable of navigating the complexities of the modern business landscape – are well worth the effort. The execution gap has plagued organisations for decades, but the tools to overcome it are now within reach. AI is more than a technological advancement; it is the key to unlocking the full potential of corporate strategy. By embracing adaptability and leveraging AI’s transformative capabilities, businesses can ensure their strategies do not just survive but thrive in the face of change.


Google maps the future of AI agents: Five lessons for businesses

Google argues that AI agents represent a fundamental departure from traditional language models. While models like GPT-4o or Google’s Gemini excel at generating single-turn responses, they are limited to what they’ve learned from their training data. AI agents, by contrast, are designed to interact with external systems, learn from real-time data and execute multi-step tasks. “Knowledge [in traditional models] is limited to what is available in their training data,” the paper notes. “Agents extend this knowledge through the connection with external systems via tools.” This difference is not just theoretical. Imagine a traditional language model tasked with recommending a travel itinerary. ... At the heart of an AI agent’s capabilities is its cognitive architecture, which Google describes as a framework for reasoning, planning and decision-making. This architecture, known as the orchestration layer, allows agents to process information in cycles, incorporating new data to refine their actions and decisions. Google compares this process to a chef preparing a meal in a busy kitchen. The chef gathers ingredients, considers the customer’s preferences and adapts the recipe as needed based on feedback or ingredient availability. Similarly, an AI agent gathers data, reasons about its next steps and adjusts its actions to achieve a specific goal.
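The orchestration layer described above can be reduced to a reason-act-observe loop: the agent picks an action, calls a tool, and folds the observation back into its working context before deciding again. The sketch below illustrates that cycle; the `lookup_weather` tool and the hard-coded decision rule are hypothetical stand-ins for a real model and real external systems.

```python
def lookup_weather(city: str) -> str:
    """Hypothetical external tool the agent can call."""
    return {"Paris": "18C, clear"}.get(city, "unknown")

TOOLS = {"weather": lookup_weather}

def decide(goal: str, context: list) -> tuple:
    """Stand-in for the model's reasoning step: pick a tool or finish."""
    if not context:                       # nothing observed yet -> gather data
        return ("weather", "Paris")
    return ("finish", f"Plan for {goal}: {context[-1]}")

def run_agent(goal: str, max_cycles: int = 5) -> str:
    context = []
    for _ in range(max_cycles):           # the orchestration cycle
        action, arg = decide(goal, context)
        if action == "finish":
            return arg
        observation = TOOLS[action](arg)  # act, then observe
        context.append(observation)       # refine context for the next cycle
    return "gave up"

print(run_agent("a day trip to Paris"))
```

The point of the loop structure, as in Google's chef analogy, is that each pass can incorporate fresh observations before committing to the next step, which is what separates an agent from a single-turn model call.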


AI agents will change work forever. Here's how to embrace that transformation

The business world is full of orthodoxies, beliefs that no one questions because they are thought to be "just the way things are". One such orthodoxy is the phrase: "Our people are the difference". A simple Google search can attest to its popularity. Some companies use this orthodoxy as their official or unofficial tagline, a tribute to their employees that they hope sends the right message internally and externally. They hope their employees feel special and customers take this orthodoxy as proof of their human goodness. Other firms use this orthodoxy as part of their explanation of what makes their company different. It's part of their corporate story. It sounds nice, caring, and positive. The only problem is that this orthodoxy is not true. ... Another way to put this is that individual employees are not fixed assets. They do not behave the same way in all conditions. In most cases, employees are adaptable and can absorb and respond to change. The environment, conditions, and potential for relationships cause this capacity to express itself. So, on the one hand, one company's employees are the same as any other company's employees in the same industry. They move from company to company, read the same magazines, attend similar conventions, and learn the same strategies and processes.


Gen AI is transforming the cyber threat landscape by democratizing vulnerability hunting

Identifying potential vulnerabilities is one thing, but writing exploit code that works against them requires a more advanced understanding of security flaws, programming, and the defense mechanisms that exist on the targeted platforms. ... This is one area where LLMs could make a significant impact: bridging the knowledge gap between junior bug hunters and experienced exploit writers. Even generating new variations of existing exploits to bypass detection signatures in firewalls and intrusion prevention systems is a notable development, as many organizations don’t deploy available security patches immediately, instead relying on their security vendors to add detection for known exploits until their patching cycle catches up. ... “AI tools can help less experienced individuals create more sophisticated exploits and obfuscations of their payloads, which aids in bypassing security mechanisms, or providing detailed guidance for exploiting specific vulnerabilities,” Nițescu said. “This, indeed, lowers the entry barrier within the cybersecurity field. At the same time, it can also assist experienced exploit developers by suggesting improvements to existing code, identifying novel attack vectors, or even automating parts of the exploit chain. This could lead to more efficient and effective zero-day exploits.”


GDD: Generative Driven Design

The independent and unidirectional relationship between agentic platform/tool and codebase that defines the Doctor-Patient strategy is also the greatest limiting factor of this strategy, and the severity of this limitation has begun to present itself as a dead end. Two years of agentic tool use in the software development space have surfaced antipatterns that are increasingly recognizable as “bot rot” — indications of poorly applied and problematic generated code. Bot rot stems from agentic tools’ inability to account for, and interact with, the macro architectural design of a project. These tools pepper prompts with lines of context from semantically similar code snippets, which are utterly useless in conveying architecture without a high-level abstraction. Just as a chatbot can manifest a sensible paragraph in a new mystery novel but is unable to thread accurate clues as to “who did it”, isolated code generations pepper the codebase with duplicated business logic and cluttered namespaces. With each generation, bot rot reduces RAG effectiveness and increases the need for human intervention. Because bot rotted code requires a greater cognitive load to modify, developers tend to double down on agentic assistance when working with it, and in turn rapidly accelerate additional bot rotting.


Someone needs to make AI easy

Few developers did a better job of figuring out how to effectively use AI than Simon Willison. In his article “Things we learned about LLMs in 2024,” he simultaneously susses out how much happened in 2024 and why it’s confusing. For example, we’re all told to aggressively use genAI or risk falling behind, but we’re awash in AI-generated “slop” that no one really wants to read. He also points out that LLMs, although marketed as the easy path to AI riches for all who master them, are actually “chainsaws disguised as kitchen knives.” He explains that “they look deceptively simple to use … but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.” If anything, this quagmire got worse in 2024. Incredibly smart people are building incredibly sophisticated systems that leave most developers incredibly frustrated by how to use them effectively.  ... Some of this stems from the inability to trust AI to deliver consistent results, but much of it derives from the fact that we keep loading developers up with AI primitives (similar to cloud primitives like storage, networking, and compute) that force them to do the heavy lifting of turning those foundational building blocks into applications.


Making the most of cryptography, now and in the future

The mathematicians and cryptographers who have worked on these NIST algorithms expect them to last a long time. Thousands of people have already tried to poke holes into them and haven’t yet made any meaningful progress toward defeating them. So, they are “probably” OK for the time being. But as much as we would like to, we cannot mathematically prove that they will never be broken. This means that commercial enterprises looking to migrate to new cryptography should be braced to change again and again — whether that is in five years, 10 years, or 50 years. ... Up until now most cryptography was mostly implicit and not under direct control of the management. Putting more controls around cryptography would not only safeguard data today, but it would provide the foundation to make the next transition easier. ... Cryptography is full of single points of failure. Even if your algorithm is bulletproof, you might end up with a faulty implementation. Agility helps us move away from these single points of failure, allowing us to adapt quickly if an algorithm is compromised. It is therefore crucial for CISOs to start thinking about agility and redundancy.


Data 2025 outlook: AI drives a renaissance of data

Though not all the technology building blocks are in place, many already are. Using AI to crawl and enrich metadata? Automatically generate data pipelines? Using regression analysis to flag data and model drift? Using entity extraction to flag personally identifiable information or summarize the content of structured or unstructured data? Applying machine learning to automate data quality resolution and data classification? Applying knowledge graphs to RAG? You get the idea. There are a few technology gaps that we expect will be addressed in 2025, including automating the correlation between data and model lineage, assessing the utility and provenance of unstructured data, and simplifying generation of vector embeddings. We expect in the coming year that bridging data file and model lineage will become commonplace with AI governance tools and services. And we’ll likely look to emerging approaches such as data observability to transform data quality practices from reactive to proactive. Let’s start with governance. In the data world, this is hardly a new discipline. Though data governance over the years has drawn more lip service than practice, for structured data, the underlying technologies for managing data quality, privacy, security and compliance are arguably more established than for AI. 
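One of the building blocks listed above, entity extraction to flag personally identifiable information, can be sketched with simple rules. The patterns below are illustrative only; real pipelines combine rules like these with ML-based named-entity recognition, and the field names are invented.

```python
import re

# Rule-based PII flagging: scan a free-text field for entity types and
# report which kinds of PII were found, so records can be routed to
# masking or access-control steps downstream.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_pii(text: str) -> dict:
    """Return every PII entity type found, with the matching strings."""
    hits = {name: pat.findall(text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

record = "Contact jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
print(flag_pii(record))
```

Even this crude classifier is enough to drive the reactive-to-proactive shift the excerpt describes: flagged records can block a pipeline run instead of being discovered in an audit.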


Beware the Rise of the Autonomous Cyber Attacker

Research has already shown that teams of AIs working together can find and exploit zero-day vulnerabilities. A team at the University of Illinois Urbana-Champaign created a “task force” of AI agents that worked as a supervised unit and effectively exploited vulnerabilities they had no prior knowledge of. In a recent report, OpenAI also cited three threat actors that used ChatGPT to discover vulnerabilities, research targets, write and debug malware, and set up command and control infrastructure. The company said the activity offered these groups “limited, incremental (new) capabilities” to carry out malicious cyber tasks. ... “Darker” AI use has, in part, prompted many of today’s top thinkers to support regulations. This year, OpenAI CEO Sam Altman said: “I’m not interested in the killer robots walking on the street … things going wrong. I’m much more interested in the very subtle societal misalignments, where we just have these systems out in society and through no particular ill intention, things go horribly wrong.” ... Theoretically, regulation may reduce unintended or dangerous use among legitimate users, but I’m certain that the criminal economy will appropriate this technology. As CISOs deploy AI more broadly, attackers’ abilities will concurrently soar.



Quote for the day:

"Leadership is a dynamic process that expresses our skill, our aspirations, and our essence as human beings." -- Catherine Robinson-Walker

Daily Tech Digest - January 06, 2025

Should States Ban Mandatory Human Microchip Implants?

“U.S. states are increasingly enacting legislation to pre-emptively ban employers from forcing workers to be ‘microchipped,’ which entails having a subdermal chip surgically inserted between one’s thumb and index finger," wrote the authors of the report. "Internationally, more than 50,000 people have elected to receive microchip implants to serve as their swipe keys, credit cards, and means to instantaneously share social media information. This technology is especially popular in Sweden, where chip implants are more widely accepted to use for gym access, e-tickets on transit systems, and to store emergency contact information.” ... “California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision," Singularity Hub wrote. "In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.” That same piece quotes Alan Mardinly, who is director of biology at Science Corporation, as saying that the advantages of a biohybrid implant are that it "can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain."


AI revolution drives demand for specialized chips, reshaping global markets

There’s now a shift toward smaller AI models that only use internal corporate data, allowing for more secure and customizable genAI applications and AI agents. At the same time, Edge AI is taking hold, because it allows AI processing to happen on devices (including PCs, smartphones, vehicles and IoT devices), reducing reliance on cloud infrastructure and spurring demand for efficient, low-power chips. “The challenge is if you’re going to bring AI to the masses, you’re going to have to change the way you architect your solution; I think this is where Nvidia will be challenged because you can’t use a big, complex GPU to address endpoints,” said Mario Morales, a group vice president at research firm IDC. “So, there’s going to be an opportunity for new companies to come in — companies like Qualcomm, ST Micro, Renesas, Ambarella and all these companies that have a lot of the technology, but now it’ll be about how to use it. ... Enterprises and other organizations are also shifting their focus from single AI models to multimodal AI, or LLMs capable of processing and integrating multiple types of data or “modalities,” such as text, images, audio, video, and sensory input. The input from diverse resources creates a more comprehensive understanding of that data and enhances performance across tasks.


How to Address an Overlooked Aspect of Identity Security: Non-human Identities

Compromised identities and credentials are the No. 1 tactic for cyber threat actors and ransomware campaigns to break into organizational networks and spread and move laterally. Identity is the most vulnerable element in an organization’s attack surface because there is a significant misperception around what identity infrastructure (IdPs, Okta, and other IT solutions) and identity security providers (PAM, MFA, etc.) can protect. Each solution only protects the silo that it is set up to secure, not an organization’s complete identity landscape, including human and non-human identities (NHIs), privileged and non-privileged users, on-prem and cloud environments, IT and OT infrastructure, and many other areas that go unmanaged and unprotected. ... Most organizations use a combination of on-prem management tools, a mix of one or more cloud identity providers (IdPs), and a handful of identity solutions (PAM, IGA) to secure identities. But each tool operates in a silo, leaving gaps and blind spots that increase exposure to attack. 8 out of 10 organizations cannot prevent the misuse of service accounts in real time because visibility and security are sporadic or missing. NHIs fly under the radar as security and identity teams sometimes don’t even know they exist.


Version Control in Agile: Best Practices for Teams

With multiple developers working on different features, fixes, or updates simultaneously, it’s easy for code to overlap or conflict without clear guidelines. Having a structured branching approach prevents confusion and minimizes the risk of one developer’s work interfering with another’s. ... One of the cornerstones of good version control is making small, frequent commits. In Agile development, progress happens in iterations, and version control should follow that same mindset. Large, infrequent commits can cause headaches when it’s time to merge, increasing the chances of conflicts and making it harder to pinpoint the source of issues. Small, regular commits, on the other hand, make it easier to track changes, test new functionality, and resolve conflicts early before they grow into bigger problems. ... An organized repository is crucial to maintaining productivity. Over time, it’s easy for the repository to become cluttered with outdated branches, unnecessary files, or poorly named commits. This clutter slows down development, making it harder for team members to navigate and find what they need. Teams should regularly review their repositories and remove unused branches or files that are no longer relevant. 


Abusing MLOps platforms to compromise ML models and enterprise data lakes

Machine learning operations (MLOps) is the practice of deploying and maintaining ML models in a secure, efficient and reliable way. The goal of MLOps is to provide a consistent, automated process for rapidly getting an ML model into production. ... There are several well-known attacks that can be performed against the MLOps lifecycle to affect the confidentiality, integrity and availability of ML models and associated data. However, performing these attacks against an MLOps platform using stolen credentials has not been covered in public security research. ... Data poisoning: This attack involves an attacker having access to the raw data used in the “Design” phase of the MLOps lifecycle, either to include attacker-provided data or to directly modify a training dataset. The goal of a data poisoning attack is to influence the data an ML model is trained on before the model is eventually deployed to production. ... Model extraction attacks involve an attacker stealing a trained ML model that is deployed in production. An attacker could use a stolen model to extract sensitive information, such as the trained weights, or to exploit the model’s predictive capabilities for their own financial gain. 
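As a toy illustration of the label-flipping flavor of data poisoning, consider a deliberately simple 1-nearest-neighbour classifier (this stands in for a real training pipeline; the values and labels are invented for the example):

```python
# Toy data poisoning demo: a 1-nearest-neighbour classifier over
# (value, label) pairs. Not a real MLOps attack tool.
def predict(training_data, value):
    """Classify by the label of the nearest training example."""
    nearest_value, nearest_label = min(
        training_data, key=lambda pair: abs(pair[0] - value)
    )
    return nearest_label

clean = [(1.0, "benign"), (2.0, "benign"), (8.0, "malicious"), (9.0, "malicious")]
print(predict(clean, 7.5))  # classified as "malicious"

# An attacker with write access to the raw training set injects a few
# mislabeled points near the region they want to influence:
poisoned = clean + [(7.4, "benign"), (7.6, "benign")]
print(predict(poisoned, 7.5))  # the same input now comes back "benign"
```

Two mislabeled points are enough to flip the prediction here, which is why the article treats write access to training data as a high-impact compromise.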


Get Going With GitOps

GitOps implementations have a significant impact on infrastructure automation by providing a standardized, repeatable process for managing infrastructure as code, Rose says. The approach allows faster, more reliable deployments and simplifies the maintenance of infrastructure consistency across diverse environments, from development to production. "By treating infrastructure configurations as versioned artifacts in Git, GitOps brings the same level of control and automation to infrastructure that developers have enjoyed with application code." ... GitOps' primary benefit is its ability to enable peer review for configuration changes, Peele says. "It fosters collaboration and improves the quality of application deployment." He adds that it also empowers developers -- even those without prior operations experience -- to control application deployment, making the process more efficient and streamlined. Another benefit is GitOps' ability to allow teams to push minimum viable changes more easily, thanks to faster and more frequent deployments, says Siri Varma Vegiraju, a Microsoft software engineer. "Using this strategy allows teams to deploy multiple times a day and quickly revert changes if issues arise," he explains via email. 
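The “versioned desired state” idea at the heart of GitOps can be sketched as a reconciliation loop. The resource names and specs below are hypothetical stand-ins for manifests stored in Git and the live environment:

```python
# Minimal GitOps-style reconciliation sketch: desired state (what Git says)
# is diffed against actual state (what the environment reports), and the
# agent derives the actions needed to converge them.
def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to make actual state match desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name} -> {spec}")
        elif actual[name] != spec:
            actions.append(f"update {name}: {actual[name]} -> {spec}")
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

desired = {"web": {"replicas": 3, "image": "app:1.4"}, "cache": {"replicas": 1}}
actual = {"web": {"replicas": 2, "image": "app:1.4"}, "worker": {"replicas": 5}}
for action in reconcile(desired, actual):
    print(action)
```

Because the loop always drives the environment toward whatever the versioned config says, reverting a bad change is just reverting a commit, which is the fast-rollback property Vegiraju describes.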


Balancing proprietary and open-source tools in cyber threat research

First, it is important to assess the requirements of an organization by identifying the capabilities needed, such as threat intelligence platforms or malware analysis tools. Next, evaluate open-source tools, which can be cost-effective and customizable but may require community support and frequent updates. In contrast, proprietary tools could offer advanced features, dedicated support, and better integration with other products. Finally, think about scalability and flexibility, as future growth may necessitate scalable solutions. ... The technology is not magic, but it is a powerful tool to speed up processes and bolster security procedures while also reducing the gap between advanced and junior analysts. However, as of today, the technology still requires verification and validation. Globally, security experts with a dual skill set in security and AI will be in high demand. As the adoption of generative AI systems increases, we need people who understand these technologies, because threat actors are also learning. ... If a CISO needs to evaluate the effectiveness of these tools, they first need to understand their needs and pain points and then seek guidance from experts. Adopting generative AI security solutions just because it is the latest trend is not the right approach.
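The evaluation steps above can be sketched as a weighted scoring matrix. The criteria, weights, and 1-5 ratings below are hypothetical examples, not figures from the article:

```python
# Weighted scoring sketch for the open-source vs. proprietary decision.
# Weights reflect one organization's (hypothetical) priorities and sum to 1.
CRITERIA_WEIGHTS = {"cost": 0.3, "features": 0.25, "support": 0.2,
                    "integration": 0.15, "scalability": 0.1}

def score(tool_ratings: dict) -> float:
    """Weighted sum of 1-5 ratings across the evaluation criteria."""
    return sum(CRITERIA_WEIGHTS[c] * tool_ratings[c] for c in CRITERIA_WEIGHTS)

open_source = {"cost": 5, "features": 3, "support": 2,
               "integration": 3, "scalability": 3}
proprietary = {"cost": 2, "features": 5, "support": 5,
               "integration": 4, "scalability": 4}

print(f"open source: {score(open_source):.2f}")
print(f"proprietary: {score(proprietary):.2f}")
```

The value of the exercise is less the final number than forcing the requirements assessment the article puts first: the weights encode what the organization actually needs before any vendor pitch is heard.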


Get your IT infrastructure AI-ready

Artificial intelligence adoption is a challenge many CIOs grapple with as they look to the future. Before jumping in, their teams must possess practical knowledge, skills, and resources to implement AI effectively. ... AI implementation is costly and the training of AI models requires a substantial investment. "To realize the potential, you have to pay attention to what it's going to take to get it done, how much it's going to cost, and make sure you're getting a benefit," Ramaswami said. "And then you have to go get it done." GenAI has rapidly transformed from an experimental technology to an essential business tool, with adoption rates more than doubling in 2024, according to a recent study by AI at Wharton ... According to Donahue, IT teams are exploring three key elements: choosing language models, leveraging AI from cloud services, and building a hybrid multicloud operating model to get the best of on-premises and public cloud services. "We're finding that very, very, very few people will build their own language model," he said. "That's because building a language model in-house is like building a car in the garage out of spare parts." Companies look to cloud-based language models, but must scrutinize security and governance capabilities while controlling cost over time. 


What is an EPMO? Your organization’s strategy navigator

The key is to ensure the entire strategy lifecycle is set up for success rather than endlessly iterating to perfect strategy execution. Without properly defining, governing, and prioritizing initiatives upfront, even the best delivery teams will struggle to achieve business goals in a way that drives the right return for the organization’s investment. For most organizations, there’s more than one gap preventing desired results. ... The EPMO’s job is to strip away unnecessary complexity and create frameworks that empower teams to deliver faster, more effectively, and with greater focus. PMO leaders should ask how this process helps to hit business goals faster. By eliminating redundant meetings and scaling governance to match project size and risk, for example, teams can shorten delivery timelines. This kind of targeted adjustment keeps momentum high without sacrificing quality or control. ... For an EPMO to be effective, ideally it needs to report directly to the C-suite. This matters because proximity equals influence. When the EPMO has visibility at the top, it can drive alignment across departments, break down silos, drive accountability, and ensure initiatives stay connected to overall business objectives, serving as the strategy navigator for the C-suite.


Data Center Hardware in 2025: What’s Changing and Why It Matters

DPUs can handle tasks like network traffic management, which would otherwise fall to CPUs. In this way, DPUs reduce the load placed on CPUs, ultimately making greater computing capacity available to applications. DPUs have been around for several years, but they’ve become particularly important as a way of boosting the performance of resource-hungry workloads, like AI training, by complementing AI accelerators. This is why I think DPUs are about to have their moment. ... Recent events have underscored the risk of security threats linked to physical hardware devices. And while I doubt anyone is currently plotting to blow up data centers by placing secret bombs inside servers, I do suspect there are threat actors out there vying to do things like plant malicious firmware on servers as a way of creating backdoors that they can use to hack into data centers. For this reason, I think we’ll see an increased focus in 2025 on validating the origins of data center hardware and ensuring that no unauthorized parties had access to equipment during the manufacturing and shipping processes. Traditional security controls will remain important, too, but I’m betting on hardware security becoming a more intense area of concern in the year ahead.



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - January 05, 2025

Phantom data centers: What they are (or aren’t) and why they’re hampering the true promise of AI

Fake data centers represent an urgent bottleneck in scaling data infrastructure to keep up with compute demand. This emerging phenomenon is preventing capital from flowing where it actually needs to. Any enterprise that can help solve this problem — perhaps leveraging AI to solve a problem created by AI — will have a significant edge. ... As utilities struggle to sort fact from fiction, the grid itself becomes a bottleneck. McKinsey recently estimated that global data center demand could reach up to 152 gigawatts by 2030, adding 250 terawatt-hours of new electricity demand. In the U.S., data centers alone could account for 8% of total power demand by 2030, a staggering figure considering how little demand has grown in the last two decades. Yet, the grid is not ready for this influx. Interconnection and transmission issues are rampant, with estimates suggesting the U.S. could run out of power capacity by 2027 to 2029 if alternative solutions aren’t found. Developers are increasingly turning to on-site generation like gas turbines or microgrids to avoid the interconnection bottleneck, but these stopgaps only serve to highlight the grid’s limitations.


Understanding And Preparing For The 7 Levels Of AI Agents

Task-specialized agents excel in somewhat narrow domains, often outperforming humans in specific tasks by collaborating with domain experts to complete well-defined activities. These agents are the backbone of many modern AI applications, from fraud detection algorithms to medical imaging systems. Their origins trace back to the expert systems of the 1970s and 1980s, like MYCIN, a rule-based system for diagnosing infections. ... Context-aware agents distinguish themselves by their ability to handle ambiguity and dynamic scenarios and to synthesize a variety of complex inputs. These agents analyze historical data, real-time streams, and unstructured information to adapt and respond intelligently, even in unpredictable scenarios. ... The idea of self-reflective agents ventures into speculative territory. These systems would be capable of introspection and self-improvement. The concept has roots in philosophical discussions about consciousness, first introduced by Alan Turing in his early work on machine intelligence and later explored by thinkers like David Chalmers. Self-reflective agents would analyze their own decision-making processes and refine their algorithms autonomously, much like a human reflects on past actions to improve future behavior.


The 7 Key Software Testing Principles: Why They Matter and How They Work in Practice

Identifying defects early in the software development lifecycle is critical because the cost and effort to fix issues grow exponentially as development progresses. Early testing not only minimizes these risks but also streamlines the development process by addressing potential problems when they are most manageable and least expensive. This proactive approach saves time, reduces costs, and ensures a smoother path to delivering high-quality software. ... The pesticide paradox suggests that repeatedly running the same set of tests will not uncover new or previously unknown defects. To continue identifying issues effectively, test methodologies must evolve by incorporating new tests, updating existing test cases, or modifying test steps. This ongoing refinement ensures that testing remains relevant and capable of discovering previously hidden problems. ... Test strategies must be tailored to the specific context of the software being tested. The requirements for different types of software—such as a mobile app, a high-transaction e-commerce website, or a business-critical enterprise application—vary significantly. As a result, testing methodologies should be customized to address the unique needs of each type of application, ensuring that testing is both effective and relevant to the software's intended use and environment.
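The claim that fix costs grow as development progresses is often illustrated with per-phase cost multipliers. The multipliers and base cost below are commonly cited rough ratios, not data from the article:

```python
# Illustrative cost-of-fix multipliers by lifecycle phase (hypothetical
# rough ratios, chosen only to show the shape of the curve).
FIX_COST_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "implementation": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(base_cost: float, phase: str) -> float:
    """Estimated cost of fixing a defect discovered in the given phase."""
    return base_cost * FIX_COST_MULTIPLIER[phase]

base = 200  # hypothetical cost of a fix made at requirements time
for phase in FIX_COST_MULTIPLIER:
    print(f"{phase:>15}: ${fix_cost(base, phase):,.0f}")
```

Whatever the exact ratios in a given organization, the monotonic growth is what justifies shifting testing left: the same defect is cheapest at the phase where it was introduced.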


This Year, RISC-V Laptops Really Arrive

DeepComputing is now working in partnership with Framework, a laptop maker founded in 2019 with the mission to “fix consumer electronics,” as it’s put on the company’s website. Framework sells modular, user-repairable laptops that owners can keep indefinitely, upgrading parts (including those that can’t usually be replaced, like the mainboard and display) over time. “The Framework laptop mainboard is a place for board developers to come in and create their own,” says Patel. The company hopes its laptops can accelerate the adoption of open-source hardware by offering a platform where board makers can “deliver system-level solutions,” Patel adds, without the need to design their own laptop in-house. ... The DeepComputing DC-Roma II laptop marked a major milestone for open source computing, and not just because it shipped with Ubuntu installed. It was the first RISC-V laptop to receive widespread media coverage, especially on YouTube, where video reviews of the DC-Roma II collectively received more than a million views. ... Balaji Baktha, Ventana’s founder and CEO, is adamant that RISC-V chips will go toe-to-toe with x86 and Arm across a variety of products. “There’s nothing that is ISA specific that determines if you can make something high performance, or not,” he says. “It’s the implementation of the microarchitecture that matters.”


The cloud architecture renaissance of 2025

First, get your house in order. The next three to six months should be spent deep-diving into current cloud spending and utilization patterns. I’m talking about actual numbers, not the sanitized versions you show executives. Map out your AI and machine learning (ML) workload projections because, trust me, they will explode beyond your current estimates. While you’re at it, identify which workloads in your public cloud deployments are bleeding money—you’ll be shocked at what you find. Next, develop a workload placement strategy that makes sense. Consider data gravity, performance requirements, and regulatory constraints. This isn’t about following the latest trend; it’s about making decisions that align with business realities. Create explicit ROI models for your hybrid and private cloud investments. Now, let’s talk about the technical architecture. The organizational piece is critical, and most enterprises get it wrong. Establish a Cloud Economics Office that combines infrastructure specialists, data scientists, financial analysts, and security experts. This is not just another IT team; it is a business function that must drive real value. Investment priorities need to shift, too. Focus on automated orchestration tools, cloud management platforms, and data fabric solutions.


How datacenters use water and why kicking the habit is nearly impossible

While dry coolers and chillers may not consume water onsite, they aren't without compromise. These technologies consume substantially more power from the local grid and potentially result in higher indirect water consumption. According to the US Energy Information Administration, the US sources roughly 89 percent of its power from natural gas, nuclear, and coal plants. Many of these plants employ steam turbines to generate power, which consumes a lot of water in the process. Ironically, while evaporative coolers are why datacenters consume so much water onsite, the same technology is commonly employed to reduce the amount of water lost to steam. Even so, the amount of water consumed through energy generation far exceeds that of modern datacenters. ... Understanding that datacenters are, with few exceptions, always going to use some amount of water, there are still plenty of ways operators are looking to reduce direct and indirect consumption. One of the most obvious is matching water flow rates to facility load and utilizing free cooling wherever possible. Using a combination of sensors and software automation to monitor pumps and filters at facilities utilizing evaporative cooling, Sharp says Digital Realty has observed a 15 percent reduction in overall water usage.


Data centres in space: they’re a brilliant idea, but a herculean challenge

Data centres beyond Earth’s atmosphere would have access to continuous solar energy and could be naturally cooled by the vacuum of space. Away from terrestrial issues like planning permission, such facilities could be rapidly deployed and expanded as the demand for more data keeps increasing. It may sound like something from a sci-fi novel, but this concept has been gaining more attention as space technology has advanced and the need for sustainable and scalable data centres has become apparent. ... Space weather, such as solar flares, could disrupt operations, while collisions with debris are a major worry – rather offsetting the fact that space-based data centres don’t have to fear earthquakes or floods. Advanced shielding could protect against things like radiation and micrometeoroids, but it will probably only do so much – particularly as Earth’s orbit becomes ever more crowded. To fix damaged facilities, advances in robotics and automation will of course help, but remote maintenance may not be able to address all issues. Sending repair crews remains a very complex and costly affair, and though the falling cost of space launches will again help here, it is still likely to be a huge burden for a few decades to come. In addition, disposing of data centre waste takes on a whole new level of complexity off-planet.


India’s Digital Data Protection Framework: Safety, Trust and Resilience

The draft rules cover various key areas, including the responsibilities of Data Fiduciaries, the role of Consent Managers, and protocols for State Data Processing, particularly in contexts like the distribution of subsidies and public services. They also detail measures for Breach Notifications, mechanisms for individuals to exercise their Data Rights, and special provisions for processing data related to children and persons with disabilities. The Data Protection Board, central to the enforcement of the Act, is set to function as a fully digital office, streamlining its operations and improving accessibility. Additionally, the rules outline procedures for appealing decisions through the Appellate Tribunal, ensuring accountability at every stage. One of the defining aspects of the draft rules is their alignment with the SARAL framework, which emphasises simplicity, clarity, and contextual definitions. To aid public understanding, illustrative examples and explanatory notes have been included, making the document accessible to stakeholders across industries, government bodies, and civil society. Both the draft rules and the accompanying explanatory notes are available on the MeitY website for public review and consultation. While legislative measures are being formalised, the government has swiftly addressed recent data breaches.


The Rise of AI Agents and Data-Driven Decisions

“In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.” Kawasaki emphasizes the developer-centric benefits as well. “AI agents will become faster and easier to build as low-code and no-code platforms mature, reducing the complexity of creating intelligent, AI-powered scenarios,” he says. ... “AI will play a transformative role in the fortification of cyber security by addressing challenges like scalability, prioritization and speed to detection. Unfortunately, cyber threats have become commonplace on the network and attackers are becoming more sophisticated in their methods – many times operating at a threshold that is very difficult to detect. As a result, organizations that fail to integrate an AI capability into their defense strategy risk being exposed to business-altering vulnerabilities. AI’s ability to monitor vast networks for imperceptible anomalies allows organizations to prioritize the most critical threats in real-time.”


New HIPAA Cybersecurity Rules Pull No Punches

Since the beginning, HIPAA has always been the best, yet insufficient, regulation dictating cybersecurity for the healthcare industry. "[There's] a history of the focus being in the wrong place because of the way HIPAA was laid out in the mid-1990s," says Errol Weiss, chief information security officer (CISO) of the Healthcare Information Sharing and Analysis Center (Health-ISAC). ... The newly proposed Security Rule aims to fix things up, with a laundry list of new requirements that touch on patch management, access controls, multifactor authentication (MFA), encryption, backup and recovery, incident reporting, risk assessments, compliance audits, and more. As Lawrence Pingree, vice president at Dispersive, acknowledges, "People have a love-hate relationship with regulations. But there's a lot of good that comes from HIPAA becoming a lot more prescriptive. Whenever you are more specific about the security controls that they must apply, the better off you are." ... Joseph J. Lazzarotti, principal at Jackson Lewis P.C., says provision 164.306 allowed for the kind of flexibility businesses always ask for: "That we're not expecting the same thing from every solo practitioner on Main Street in the Midwest versus the large hospital on the East Coast. There are obviously going to be different expectations for compliance."



Quote for the day:

“Do the best you can until you know better. Then when you know better, do better.” -- Maya Angelou

Daily Tech Digest - January 03, 2025

Tech predictions 2025 – what could be in store next year?

In 2025, we will hear of numerous cases where threat actors trick a corporate Gen AI solution into giving up sensitive information and causing high-profile data breaches. Many enterprises are using Gen AI to build customer-facing chatbots, in order to aid everything from bookings to customer service. Indeed, in order to be useful, LLMs must ultimately be granted access to information and systems in order to answer questions and take actions that a human would otherwise have been tasked with. As with any new technology, we will witness numerous corporations grant LLMs access to huge amounts of potentially sensitive data, without appropriate security considerations. ... The future of work won’t be a binary choice between humans and machines. It will be an “and.” AI-powered humanoids will form a part of the future workforce, and we will likely see the first instance happen next year. This will force companies to completely reimagine their workplace dynamics – and the technology that powers them. ... At the same time, organisations must ensure their security postures keep pace. Not only to ensure the data being processed by humanoids is kept safe, but also to keep the humanoids safeguarded from hacking and threatening tweaks to their software and commands. 


7 Private Cloud Trends to Watch in 2025

A lot of organizations are repatriating workloads to private cloud from public cloud, but Rick Clark, global head of cloud advisory at digital transformation solutions company UST, warns they aren’t giving it much forethought, just as they didn’t when migrating to public clouds. As a result, they’re not getting the ROI they hope for. “We haven’t still figured out what is appropriate for workloads. I’m seeing companies wanting to move back the percentage of their workload to reduce cost without really understanding what the value is so they’re devaluing what they're doing,” says Clark. ... Artificial intelligence and automation are also set to play a crucial role in private cloud management. They enable businesses to handle growing complexity by automating resource optimization, enhancing threat detection, and managing costs. “The ongoing talent shortage in cybersecurity makes [AI and automation] especially valuable. By reducing manual workloads, AI allows companies to do more with fewer resources,” says Trevor Horwitz, CISO and founder at a cybersecurity, consulting, and compliance services provider. ... Security affects all aspects of a cloud journey, including the calculus of when and where to use private cloud environments. One significant challenge is making sure that all layers of the stack have detection and response capability.


Agility in Action: Elevating Safety through Facial Recognition

Facial Recognition Technology (FRT) stands out as a leading solution to these problems, protecting not only the physical boundaries but also the organization’s overall integrity. Through precise identity verification and user validation, FRT considerably lowers the possibility of unauthorized access. Organizations, irrespective of size, can benefit from this technology, which offers improved security and operational effectiveness. ... A comprehensive physical security program with interconnected elements serves as the backbone of any security infrastructure. Regulating who can enter or exit a facility is vital. Effective systems include traditional mechanical methods, such as locks and keys, as well as electronic solutions like RFID cards. By using these methods, only authorized persons are able to enter. Nonetheless, a technological solution that works with many Original Equipment Manufacturers (OEMs) is required to successfully counter today’s dangers. In addition to guaranteeing general user convenience, this technology should give top priority to data privacy and safety compliance.
Effective physical security is built on deterring unauthorized entry and identifying people of interest. This can include anything from physical security personnel to surveillance and access control systems.


Strategies for Managing Data Debt in Growing Organizations

Not all data debt is created equal. Growing organizations experiencing data sprawl at an expanding rate must conduct a thorough impact assessment to determine which aspects of their data debt are most harmful to operational efficiency and strategic initiatives. An effective approach involves quantifying the potential risks associated with each type of debt – such as compliance violations or lost customer insights – and calculating the opportunity cost of maintaining versus mitigating them. ... A core approach to managing data debt is to establish strong data governance practices that address inconsistencies and fragmentation. Before anything else, you must establish an adequate access control system and ensure its imperviousness. Next, you must think about implementing robust validation mechanisms that will help prevent further debt accumulation. Data governance frameworks provide a foundation for minimizing ad hoc fixes, which are the primary drivers of data debt. ... An architectural shift that facilitates scalability can help avoid the bottlenecks that arise when data outgrows its infrastructure. Technologies like cloud platforms offer scalability without heavy up-front investments, allowing organizations to expand their capacity in line with their growth.
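The impact assessment described above (quantifying the risk of each type of debt and comparing the cost of maintaining it against the cost of mitigating it) can be sketched as a simple scoring pass. All debt categories and figures here are hypothetical:

```python
# Hypothetical data-debt inventory:
# (name, likelihood 0-1, impact if realized, annual carrying cost, one-time fix cost)
debts = [
    ("duplicate customer records", 0.9,  50_000, 30_000, 20_000),
    ("undocumented legacy schema", 0.4, 200_000, 10_000, 60_000),
    ("stale compliance fields",    0.2, 500_000,  5_000, 40_000),
]

def prioritize(debts):
    """Rank debts by the net annual benefit of fixing them now."""
    scored = []
    for name, likelihood, impact, carrying, fix in debts:
        expected_annual_cost = likelihood * impact + carrying
        scored.append((expected_annual_cost - fix, name))
    # Highest net benefit of remediation first
    return [name for net, name in sorted(scored, reverse=True)]

print(prioritize(debts))
```

Note that the low-likelihood compliance debt ranks first here because its potential impact dominates, which is exactly the kind of non-obvious ordering a quantified assessment is meant to surface.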


Secure by design vs by default – which software development concept is better?

The challenge here is that, while from a security perspective we may agree that it is wise, it could inevitably put developers and vendors at a competitive disadvantage. Those who don’t prioritize secure-by-design can get features, functionality, and products out to market faster, leading to potentially more market share, revenue, customer attraction/retention, and more. Additionally, many vendors are venture-capital backed, which comes with expectations of return on investment — and the reality that cyber is just one of many risks their business is facing. They must maintain market share, hit revenue targets, deliver customer satisfaction, raise brand awareness/exposure, and achieve the most advantageous business outcomes. ... Secure-by-default development focuses on ensuring that software components arrive at the end-user with all security features and functions fully implemented, with the goal of providing maximum security right out of the box. Most cyber professionals have experienced having to apply CIS Benchmarks, DISA STIGs, vendor guidance and so on to harden a new product or software to ensure we reduce its attack surface. Secure-by-default flips that paradigm on its head so that products arrive hardened and require customers to roll back or loosen the hardened configurations to tailor them to their needs.


The modern CISO is a cornerstone of organizational success

Historically, CISOs focused on technical responsibilities, including managing firewalls, monitoring networks, and responding to breaches. Today, they are integral to the C-suite, contributing to decisions that align security initiatives with organizational goals. This shift in responsibilities reflects the growing realization that security is not just an IT function but a critical enabler of business goals, customer trust, and competitive advantage. CISOs are increasingly embedded in the strategic planning process, ensuring that cybersecurity initiatives support overall business goals rather than operate as standalone activities. ... One of the most critical aspects of the modern CISO role is integrating security into operational processes without disrupting productivity. This involves working closely with operations teams to design workflows prioritizing efficiency and security. This aspect of their responsibility ensures that security does not become a bottleneck for business operations but enhances operational resilience, efficiency, and productivity. ... The CISO of tomorrow will redefine success by aligning cybersecurity with business objectives, fostering a culture of shared responsibility, and driving resilience in the face of emerging risks like AI-driven attacks, quantum threats, and global regulatory pressures.


Key Infrastructure Modernization Trends for Enterprises

Cloud providers and data centers need advanced cooling technologies, including rear-door heat exchange, immersion and direct-to-chip systems. Sustainable power sources such as solar and wind must supplement traditional energy resources. These infrastructure changes will support new chip generations, increased rack densities and expanding AI requirements while enabling edge computing use cases. "Liquid cooling has evolved to move from cooling the broader data center environment to getting closer and even within the infrastructure," Hewitt said. "Liquid-cooled infrastructure remains niche today in terms of use cases but will become more predominant as next generations of GPUs and CPUs increase in power consumption and heat production." ... Document existing business processes and workflows to improve visibility and identify gaps suitable for AI implementation. Organizations must organize data for AI tools that can bring in improvements, keep track of where the data resides to organize it for AI use, build internal guidelines for training and testing AI-driven workflows, and create robust controls for processes that incorporate AI agents.


Being Functionless: How to Develop a Serverless Mindset to Write Less Code!

As the adoption of FaaS increased, cloud providers added a variety of language runtimes to cater to different computational needs, skills, etc., offering something for most programmers. Language runtimes such as Java, .NET, Node.js, Python, Ruby, Go, etc., are the most popular and widely adopted. However, this also brings some challenges to organizations adopting serverless technology. More than technology challenges, these are mindset challenges for engineers. ... Sustainability is a crucial aspect of modern cloud operation. Consuming renewable energy, reducing carbon footprint, and achieving green energy targets are top priorities for cloud providers. Cloud providers invest in efficient power and cooling technologies and operate an efficient server population to achieve higher utilization. For this reason, AWS recommends using managed services for efficient cloud operation, as part of their Well-Architected Framework best practices for sustainability. ... For engineers new to serverless, adapting their thinking to its demands can be challenging. Hence, you hear about the serverless mindset as a prerequisite to adopting serverless. This is because working with serverless requires a new way of thinking, developing, and operating applications in the cloud. 


Unlocking opportunities for growth with sovereign cloud

Although there is no standard definition of what constitutes a “sovereign cloud,” there is a general understanding that it must ensure sovereignty at three fundamental levels: data, operations, and infrastructure. Sovereign cloud solutions, therefore, have highly demanding requirements when it comes to digital security and the protection of sensitive data, from technical, operational, and legal perspectives. The sovereign cloud concept also opens up avenues for competition and innovation, particularly among local cloud service providers within the UK. In a recent PwC survey, 78% of UK business leaders said they have adopted cloud in most or all areas of their organisations. However, many of the providers serving them operate outside the country, typically across the Atlantic. The development of sovereign cloud offerings provides the perfect push for UK cloud service providers to increase their market share, providing local tools to power local innovation. For a large-scale, accessible, and competitive sovereign cloud ecosystem to emerge, a combination of certain factors is essential. Firstly, partnerships are crucial. Developing local sovereign cloud solutions that offer the same benefits and ease of use as the large hyperscalers is a significant challenge.


The Tipping Point: India's Data Center Revolution

"Data explosion and data localization are paving the way for a data center revolution in India. The low data tariff plans, access to affordable smartphones, adoption of new technologies and growing user base of social media, e-commerce, gaming and OTT platforms are some of the key triggers for data explosion. Also, AI-led demand, which is expected to increase multi-fold in the next 3-5 years, presents significant opportunities. This, coupled with favourable regulatory policies from the Central and State governments, the draft Digital Personal Data Protection Bill, and the infrastructure status are supporting the growth prospects," said Anupama Reddy, Vice President and Co-Group Head - Corporate Ratings, ICRA. ... The high-octane data center industry comes with its own set of challenges: high operational costs, alongside hurdles in scalability, cybersecurity, sustainability, and workforce skills. Power and cooling are major cost drivers, with data centers consuming 1-1.5 per cent of global electricity. Advanced cooling solutions and energy-efficient hardware can help reduce energy costs while supporting environmental goals.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree