Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threatens the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief technology officer and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not just a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.

Daily Tech Digest - July 15, 2025


Quote for the day:

“Rarely have I seen a situation where doing less than the other guy is a good strategy.” -- Jimmy Spithill


CyberArk: Rise in Machine Identities Poses New Risks

The CyberArk report outlines the substantial business consequences of failing to protect machine identities, leaving organizations vulnerable to costly outages and breaches. Seventy-two percent of organizations experienced at least one certificate-related outage over the past year - a sharp increase compared to prior years. Additionally, 50% reported security incidents or breaches stemming from compromised machine identities. Companies that have experienced non-human identity security breaches include xAI, Uber, Schneider Electric, Cloudflare and BeyondTrust, among others. "Machine identities of all kinds will continue to skyrocket over the next year, bringing not only greater complexity but also increased risks," said Kurt Sand, general manager of machine identity security at CyberArk. "Cybercriminals are increasingly targeting machine identities - from API keys to code-signing certificates - to exploit vulnerabilities, compromise systems and disrupt critical infrastructure, leaving even the most advanced businesses dangerously exposed." ... Fifty percent of security leaders reported security incidents or breaches linked to compromised machine identities in the previous year. These incidents led to delays in application launches for 51% companies, customer-impacting outages for 44% and unauthorized access to sensitive systems for 43%.


What Can Businesses Do About Ethical Dilemmas Posed by AI?

Digital discrimination is a product of bias incorporated into the AI algorithms and deployed at various levels of development and deployment. The biases mainly result from the data used to train the large language models (LLMs). If the data reflects previous iniquities or underrepresents certain social groups, the algorithm has the potential to learn and perpetuate those iniquities. Biases may occasionally culminate in contextual abuse when an algorithm is used beyond the environment or audience for which it was intended or trained. Such a mismatch may result in poor predictions, misclassifications, or unfair treatment of particular groups. Lack of monitoring and transparency merely adds to the problem. In the absence of oversight, biased results are not discovered. ... Human-in-the-loop systems allow intervention in real time whenever AI acts unjustly or unexpectedly, thus minimizing potential harm and reinforcing trust. Human judgment makes choices more inclusive and socially sensitive by including cultural, emotional, or situational elements, which AI lacks. When humans remain in the loop of decision-making, accountability is shared and traceable. This removes ethical blind spots and holds users accountable for consequences.


Beyond the hype: AI disruption in India’s legal practice

The competitive dynamics are stark. When AI can complete a ten-hour task in two hours, firms face a pricing paradox: how to maintain profitability while passing efficiency gains to the clients? Traditional hourly billing models become unsustainable when the underlying time economics change dramatically. ... Effective AI integration hinges on a strong technological foundation, encompassing secure data architecture, advanced cybersecurity measures and a seamless and hassle-free interoperability between systems and already existing platforms. SAM’s centralised Harvey AI approach and CAM’s multi-tool strategy both imply significant investment in these backend capabilities. ... Merely automating existing workflows fails to leverage AI’s transformative potential. To unlock AI’s full transformative value, firms must rethink their legal processes – streamlining tasks, reallocating human resources to higher order functions and embedding AI at the core of decision-making processes and document production cycles. ... AI enables alternative service models that go beyond the billable hour. Firms that rethink on how they can price say, by offering subscription-based or outcome-driven services, and position themselves as strategic partners rather than task executors, will be best positioned to capture long-term client value in an AI-first legal economy.


‘Chronodebt’: The lose/lose situation few CIOs can escape

One needn’t be an expert in the field of technical architecture to know that basing a capability as essential as air traffic control on such obviously obsolete technology is a bad idea. Someone should lose their job over this. And yet, nobody has lost their job over this, nor should they have. That’s because the root cause of the FAA’s woes — poor chronodebt management, in case you haven’t been paying attention — is a discipline that’s rarely tracked by reliable metrics and almost-as-rarely budgeted for. Metrics first: While the discipline of IT project estimation is far from reliable, it’s good enough to be useful in estimating chronodebt’s remediation costs — in the FAA’s case what it would have to spend to fix or replace its integrations and the integration platforms on which those integrations rely. That’s good enough, with no need for precision. Those running the FAA for all these years could, that is, estimate the cost of replacing the programs used to export and update its repositories, and replacing the 3 ½” diskettes and paper strips on which they rely. But, telling you what you already know, good business decisions are based not just on estimated costs, but on benefits netted against those costs. The problem with chronodebt is that there are no clear and obvious ways to quantify the benefits to be had by reducing it.


Can System Initiative fix devops?

System Initiative turns traditional devops on its head. It translates what would normally be infrastructure configuration code into data, creating digital twins that model the infrastructure. Actions like restarting servers or running complex deployments are expressed as functions, then chained together in a dynamic, graphical UI. A living diagram of your infrastructure refreshes with your changes. Digital twins allow the system to automatically infer workflows and changes of state. “We’re modeling the world as it is,” says Jacob. For example, when you connect a Docker container to a new Amazon Elastic Container Service instance, System Initiative recognizes the relationship and updates the model accordingly. Developers can turn workflows — like deploying a container on AWS — into reusable models with just a few clicks, improving speed. The GUI-driven platform auto-generates API calls to cloud infrastructure under the hood. ... An abstraction like System Initiative could embrace this flexibility while bringing uniformity to how infrastructure is modeled and operated across clouds. The multicloud implications are especially intriguing, given the rise in adoption of multiple clouds and the scarcity of strong cross-cloud management tools. A visual model of the environment makes it easier for devops teams to collaborate based on a shared understanding, says Jacob — removing bottlenecks, speeding feedback loops, and accelerating time to value.


An exodus evolves: The new digital infrastructure market

Regulatory pressures have crystallised around concerns over reliance on a small number of US-based cloud providers. With some hyperscalers openly admitting that they cannot guarantee data stays within a jurisdiction during transfer, other types of infrastructure make it easier to maintain compliance with UK and EU regulations. This is a clear strategy to avoid future financial and reputational damage. ... 2025 is a pivotal year for digital infrastructure. Public cloud will remain an essential part of the IT landscape. But the future of data strategy lies in making informed, strategic decisions, leveraging the right mix of infrastructure solutions for specific workloads and business needs. As part of our research, we assessed the shape of this hybrid market. ... With one eye to the future, UK-based cloud providers must be positioned as a strategic advantage, offering benefits such as data sovereignty, regulatory compliance, and reduced latency. Businesses will need to situate themselves ever more precisely on the spectrum of digital infrastructure. Their location will reflect how they embrace a hybrid model that balances public cloud, private cloud, colocation and on-premise options. This approach will not only optimise performance and costs but also provide long-term resilience in an evolving digital economy.


How Trump's Cyber Cuts Dismantle Federal Information Sharing

"The budget cuts, personnel reductions and other policy changes have decreased the volume and frequency of CISA's information sharing activities in both formal and informal channels," Daniel told ISMG. While sector-specific ISACs still share information, threat sharing efforts tied to federal funding - such as the Multi-State ISAC, which supports state and local governments - "have been negatively affected," he said . One former CISA staffer who recently accepted the administration's deferred resignation offer told ISMG the agency's information-sharing efforts "were among the first to take a hit" from the administration's cuts, with many feeling pressured into silence. ... Analysts have also warned that cuts to cyber staff across federal agencies and risks to initiatives including the National Vulnerability Database and Common Vulnerabilities and Exposures program could harm cybersecurity far beyond U.S. borders. The CVE program is dealing with backlogs and a recent threat to shut down funding over a federal contracting issue. Failure of the CVE Program "would have wide impacts on vulnerability management efficiency and effectiveness globally," said John Banghart, senior director for cybersecurity services at Venable and a key architect of the Obama administration's cybersecurity policy as a former director for federal cybersecurity for the National Security Council.


Securing vehicles as they become platforms for code and data

Recently security researchers have demonstrated real-world attacks against connected cars, such as wireless brake manipulation on heavy trucks by spoofing J-bus diagnostic packets. Another very recent example is successful attacks against autonomous car LIDAR systems. While the distribution of EV and advanced cars becomes more pervasive across our society, we expect these types of attacks and methods to continue to grow in complexity. Which makes a continuous, real-time approach to securing the entire ecosystem (from charger, to car, to driver) even more so important. ... Over-the-air (OTA) update hijacking is very real and often enabled by poor security design, such as lack of encryption, improper authentication between the car and backend, and lack of integrity or checksum validation. Attack vectors that the traditional computer industry has dealt with for years are now becoming a harsh reality in the automotive sector. Luckily, many of the same approaches used to mitigate these risks in IT can also apply here ... When we look at just the automobile, we have a variety of connected systems which typically all come from different manufacturers (Android Automotive, or QNX as examples) which increases the potential for supply chain abuse. We also have devices which the driver introduces which interacts with the car’s APIs creating new entry points for attackers.


Strategizing with AI: How leaders can upgrade strategic planning with multi-agent platforms

Building resiliency and optionality into a strategic plan challenges humans’ cognitive (and financial) bandwidth. The seemingly endless array of future scenarios, coupled with our own human biases, conspires to anchor our understanding of the future in what we’ve seen in the past. Generative AI (GenAI) can help overcome this common organizational tendency for entrenched thinking, and mitigate the challenges of being human, while exploiting LLMs’ creativity as well as their ability to mirror human behavioral patterns. ... In fact, our argument reflects our own experience using a multi-agent LLM simulation platform built by the BCG Henderson Institute. We’ve used this platform to mirror actual war games and scenario planning sessions we’ve led with clients in the past. As we’ve seen firsthand, what makes an LLM multi-agent simulation so powerful is the possibility of exploiting two unique features of GenAI—its anthropomorphism, or ability to mimic human behavior, and its stochasticity, or creativity. LLMs can role-play in remarkably human-like fashion: Research by Stanford and Google published earlier this year suggests that LLMs are able to simulate individual personalities closely enough to respond to certain types of surveys with 85% accuracy as the individuals themselves.


The Network Challenges of IoT Integration

IoT interoperability and compatible security protocols are a particular challenge. Although NIST and ISO, among other organizations, have issued IoT standards, smaller IoT manufacturers don't always have the resources to follow their guidance. This becomes a network problem because companies have to retool these IoT devices before they can be used on their enterprise networks. Moreover, because many IoT gadgets are delivered with default security settings that are easy to undo, each device has to be hand-configured to ensure it meets company security standards. To avoid potential interoperability pitfalls, network staff should evaluate prospective technology before anything is purchased. ... First, to achieve high QoS, every data pipeline on the network must be analyzed -- as well as every single system, application and network device. Once assessed, each component must be hand-calibrated to run at the highest performance levels possible. This is a detailed and specialized job. Most network staff don't have trained QoS technicians on board, so they must go externally for help. Second, which areas of the business get maximum QoS, and which don't? A medical clinic, for example, requires high QoS to support a telehealth application where doctors and patients communicate. 

Daily Tech Digest - July 12, 2025


Quote for the day:

"If you do what you’ve always done, you’ll get what you’ve always gotten." -- Tony Robbins


Why the Value of CVE Mitigation Outweighs the Costs

When it comes to CVEs and continuous monitoring, meeting compliance requirements can be daunting and confusing. Compliance isn’t just achieved; rather, it is a continuous maintenance process. Compliance frameworks might require additional standards, such as Federal Information Processing Standards (FIPS), Federal Risk and Authorization Management Program (FedRAMP), Security Technical Implementation Guides (STIGs) and more that add an extra layer of complexity and time spent. The findings are clear. Telecommunications and infrastructure companies reported an average of $3 million in new revenue annually by improving their container security enough to qualify for security-sensitive contracts. Healthcare organizations averaged $7.3 million in new revenue, often driven by unlocking expansion into compliance-heavy markets. ... The industry has long championed “shifting security left,” or embedding checks earlier in the pipeline to ensure security measures are incorporated throughout the entire software development life cycle. However, as CVE fatigue worsens, many teams are realizing they need to “start left.” That means: Using hardened, minimal container images by default; Automating CVE triage and patching through reproducible builds; Investing in secure-by-default infrastructure that makes vulnerability management invisible to most developers


Generative AI: A Self-Study Roadmap

Building generative AI applications requires comfort with Python programming and basic machine learning concepts, but you don't need deep expertise in neural network architecture or advanced mathematics. Most generative AI work happens at the application layer, using APIs and frameworks rather than implementing algorithms from scratch. ... Modern generative AI development centers around foundation models accessed through APIs. This API-first approach offers several advantages: you get access to cutting-edge capabilities without managing infrastructure, you can experiment with different models quickly, and you can focus on application logic rather than model implementation. ... Generative AI applications require different API design patterns than traditional web services. Streaming responses improve user experience for long-form generation, allowing users to see content as it's generated. Async processing handles variable generation times without blocking other operations. ... While foundation models provide impressive capabilities out of the box, some applications benefit from customization to specific domains or tasks. Consider fine-tuning when you have high-quality, domain-specific data that foundation models don't handle well—specialized technical writing, industry-specific terminology, or unique output formats requiring consistent structure.


Announcing GenAI Processors: Build powerful and flexible Gemini applications

At its core, GenAI Processors treat all input and output as asynchronous streams of ProcessorParts (i.e. two-way aka bidirectional streaming). Think of it as standardized data parts (e.g., a chunk of audio, a text transcription, an image frame) flowing through your pipeline along with associated metadata. This stream-based API allows for seamless chaining and composition of different operations, from low-level data manipulation to high-level model calls. ... We anticipate a growing need for proactive LLM applications where responsiveness is critical. Even for non-streaming use cases, processing data as soon as it is available can significantly reduce latency and time to first token (TTFT), which is essential for building a good user experience. While many LLM APIs prioritize synchronous, simplified interfaces, GenAI Processors – by leveraging native Python features – offer a way for writing responsive applications without making code more complex. ... GenAI Processors is currently in its early stages, and we believe it provides a solid foundation for tackling complex workflow and orchestration challenges in AI applications. While the Google GenAI SDK is available in multiple languages, GenAI Processors currently only support Python.


Scaling the 21st-century leadership factory

Identifying priority traits is critical; just as important, CEOs and their leadership teams must engage early and often with high-potential employees and unconventional thinkers in the organization, recognizing that innovation often comes from the edges of the business. Skip-level meetings are a powerful tool for this purpose. Most famously, Apple’s Steve Jobs would gather what he deemed the 100 most influential people at the company, including young engineers, to engage directly in strategy discussions—regardless of hierarchy or seniority. ... A culture of experimentation and learning is essential for leadership development—but it must be actively pursued. “Instillation of personal initiative, aggressiveness, and risk-taking doesn’t spring forward spontaneously,” General Jim Mattis explained in his 2019 book on leadership, Call Sign Chaos. “It must be cultivated for years and inculcated, even rewarded, in an organization’s culture. If the risk-takers are punished, then you will retain in your ranks only the risk averse,” he wrote. ... There are multiple ways to streamline decision-making, including redefining decision rights to focus on a handful of owners and distinguishing between different types of decisions, as not all choices are high stakes. 


Lessons learned from Siemens’ VMware licensing dispute

Siemens threatened to sue VMware if it didn’t provide ongoing support for the software and handed over a list of the software it was using that it wanted support for. Except that the list included software that it didn’t have any licenses for, perpetual or otherwise. Broadcom-owned VMware sued, Siemens countersued, and now the companies are battling over jurisdiction. Siemens wants the case to be heard in Germany, and VMware prefers the United States. Normally, if unlicensed copies of software are discovered during an audit, the customer pays the difference and maybe an additional penalty. After all, there are always minor mistakes. The vendors try to keep these costs at least somewhat reasonable, since at some point, customers will migrate from mission-critical software if the pain is high enough. ... For large companies, it can be hard to pivot quickly. Using open-source software can help reduce the risk of unexpected license changes, and, for many major tools there are third-party service providers that can offer ongoing support. Another option is SaaS software, Ringdahl says, because it does make license management a bit easier, since there’s usually transparency both for the customer and the vendor about how much usage the product is getting.


Microsoft says regulations and environmental issues are cramping its Euro expansion

One of the things that everyone needs to consider is how datacenter development in Europe is being enabled or impeded, Walsh said. "Because we have moratoriums coming at us. We have communities that don't want us there," she claimed, referring particularly to Ireland where local opposition to bit barns has been hardening because of the amount of electricity they consume and their environmental impact. Another area of discussion at the Datacloud keynote was the commercial models for acquiring datacenter capacity, which it was felt had become unfit for the new environment where large amounts are needed quickly. "From our perspective, time to market is essential. We've done a lot of leasing in the last two years, and that is all time for market pressure," Walsh said. "I also manage land acquisition and land development, which includes permitting. So the joy of doing that is that when my permits are late, I can lease so I can actually solve my own problems, which is amazing, but the way things are going, it's going to be very difficult to continue to lease the infrastructure using co-location style funding. It's just getting too big, and it's going to get harder and harder to get up the chain, for sure," she explained. ... "European regulations and planning are very slow, and things take 18 months longer than anywhere else," she told attendees at <>Bisnow's Datacenter Investment Conference and Expo (DICE) in Ireland.


350M Cars, 1B Devices Exposed to 1-Click Bluetooth RCE

The scope of affected systems is massive. The developer, OpenSynergy, proudly boasts on its homepage that Blue SDK — and RapidLaunch SDK, which is built on top of it and therefore also possibly vulnerable — has been shipped in 350 million cars. Those cars come from companies like Mercedes-Benz, Volkswagen, and Skoda, as well as a fourth known but unnamed company. Since Ford integrated Blue SDK into its Android-based in-vehicle infotainment (IVI) systems in November, Dark Reading has reached out to determine whether it too was exposed. ... Like any Bluetooth hack, the one major hurdle in actually exploiting these vulnerabilities is physical proximity. An attacker would likely have to position themselves within around 10 meters of a target device in order to pair with it, and the device would have to comply. Because Blue SDK is merely a framework, different devices might block pairing, limit the number of pairing requests an attacker could attempt, or at least require a click to accept a pairing. This is a point of contention between the researchers and Volkswagen. ... "Usually, in modern cars, an infotainment system can be turned on without activating the ignition. For example, in the Volkswagen ID.4 and Skoda Superb, it's not necessary," he says, though the case may vary vehicle to vehicle. 


Leaders will soon be managing AI agents – these are the skills they'll need, according to experts

An AI agent is essentially just "a piece of code", says Jarah Euston, CEO and Co-Founder of AI-powered labour platform WorkWhile, which connects frontline workers to shifts. "It may not have the same understanding, empathy, awareness of the politics of your organization, of the fears or concerns or ambitions of the people around that it is serving. "So managers have to be aware that the agent is only as good as how you've trained it. I don't think we're close yet to having agents that can operate without any human oversight. "As a manager, you want to leverage the AI to make you and your team more productive, but you constantly have to be checking, iterating and training your tools to get the most out of them."  ... Technological skills are expected to become increasingly vital over the next five years, outpacing the growth of all other skill categories. Leading the way are AI and big data, followed closely by networking, cybersecurity and overall technological literacy. The so-called 'soft skills' of creative thinking and resilience, flexibility and agility are also rising in importance, along with curiosity and lifelong learning. Empathy is one skill AI agents can't learn, says Women in Tech's Moore Aoki, and she believes this will advantage women.


Common Master Data Management (MDM) Pitfalls

In addition to failing to connect MDM’s value with business outcomes, “People start with MDM by jumping in with the technology,” Cooper said. “Then, they try to fit the people, processes, and master data into their selected technology.” Moreover, in the process of prioritizing technology first, organizations take for granted that they have good data quality, data that is clean and fit for purpose. Then, during a major initiative, such as migrating to a cloud environment, they discover their data is not so clean. ... Organizations fall into the pitfalls above and others because they try to do it alone, and most have never done MDM before. Instead, “Organizations have different capabilities with MDM,” said Cooper, “and you don’t know what you don’t know.” ... Connecting the MDM program to business objectives requires talking with the stakeholders across the organization, especially divisions with direct financial risks such as sales, marketing, procurement, and supply. Cooper said readers should learn the goals of each unit and how they measure success in growing revenue, reducing cost, mitigating risk, or operating more efficiently. ... Cooper advised focusing on data quality – e.g., through reference data – rather than technology. In the figure below, a company has data about a client, Emerson Electric, as shown on the left. 


Why Cloud Native Security Is More Complex Than You Think

Enterprise security tooling can help with more than just the monitoring of these vulnerabilities though. And, often older vulnerabilities that have been patched by the software vendor will offer “fix status” advice. This is where a specific package version is shown to the developer or analyst responsible for remediating the vulnerability. When they upgrade the current package to that later version, the vulnerability alert will be resolved. To confuse things further, the applications running in containers or serverless functions also need to be checked for non-compliance. Warnings that may be presented by security tooling when these applications are checked against recognised compliance standards, frameworks or benchmarks for noncompliance are wide and varied. For example, if a serverless function has overly permissive access to another cloud service and an attacker gets access to the serverless function’s code via a vulnerability, the attack’s blast radius could exponentially increase as a result. Or, often compliance checks reveal how containers are run with inappropriate network settings. ... At a high level, these components and importantly, how they interact with each other, is why applications running in the cloud require time, effort and specialist expertise to secure them.

Daily Tech Digest - July 11, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you them feel." -- Mary Kay Ash


Throwing AI at Developers Won’t Fix Their Problems

Organizations are spending too much time, money and energy focusing on the tools themselves. “Should we use OpenAI or Anthropic? Copilot or Cursor?” We see two broad patterns for how organizations approach AI tool adoption. The first is that leadership has a relationship with a certain vendor or just a personal preference, so they pick a tool and mandate it. This can work, but you’ll often get poor results — not because the tool is bad, but because the market is moving too fast for centralized teams to keep up. ... The second model, which generally works much better, is to allow early adopters to try new tools and find what works. This gives developers autonomy to improve their own workflows and reduces the need for a central team to test every new tool exhaustively. Comparing the tools by features or technology is less important every day. You’ll waste a lot of energy debating minor differences that won’t matter next year. Instead, focus on what problem you want to solve. Are you trying to improve testing? Code review? Documentation? Incident response? Figure out the goal first. Then see if an AI tool (or any tool) actually helps. If you don’t, you’ll just make DevEx worse: You’ll have a landscape of 100 tools nobody knows how to use, and you’ll deliver no real value.


Anatomy of a Scattered Spider attack: A growing ransomware threat evolves

Scattered Spider began its attack against the unnamed organization’s public-facing Oracle Cloud authentication portal, targeting its chief financial officer. Using personal details, such as the CFO’s date of birth and the last four digits of their Social Security number obtained from public sources and previous breaches, Scattered Spider impersonated the CFO in a call to the company’s help desk, tricking help desk staff into resetting the CFO’s registered device and credentials. ... The cybercriminals extracted more than 1,400 secrets by taking advantage of compromised admin accounts tied to the target’s CyberArk password vault and likely an automated script. Scattered Spider granted administrator roles to compromised user accounts before using tools, including ngrok, to maintain access on compromised virtual machines. ... Scattered Spider’s operations have become more aggressive and compressed. “Within hours of initial compromise — often via social engineering — they escalate privileges, move laterally, establish persistence, and begin reconnaissance across both cloud and on-prem environments,” Beek explained. “This speed and fluidity represent a significant escalation in operational maturity.” ... Defending effectively against Scattered Spider involves tackling both human and technical vulnerabilities, ReliaQuest researchers noted.


Data governance: The contract layer that makes agentic systems possible

Today, AI has changed everything. Lineage, access enforcement and cataloging must operate in real time and cover vastly more data types and sources. Models consume data continuously and make decisions instantly, raising the stakes for mistakes or gaps in oversight. What used to be a once-a-week check is now an always-on discipline. This transformation has turned data governance from a checklist into a living system that protects quality and trust at scale. ... One of the biggest misconceptions is that governance slows down innovation. In reality, good governance speeds it up. By clarifying ownership, policies and data quality from the start, teams avoid spending precious time reconciling mismatches and can focus on delivering AI that works as intended. A clear governance framework reduces unnecessary data copies, lowers regulatory risk and prevents AI from producing unpredictable results. Getting this right also requires a culture shift. Producers and consumers alike need to see themselves as co-stewards of shared data products. ... Enterprises deploying agentic AI cannot leave governance behind. These systems run continuously, make autonomous decisions and rely on accurate context to stay relevant. Governance must move from passive checks to an active, embedded foundation within both architecture and culture.


How CIOs Are Navigating Today’s Hyper Volatility

“When it comes to changing dynamics, [such as] AI and driving innovation, there are several things that people like me are dealing with right now. There is an impact on how you hire people, staffing, how to structure your organization,” says Johar. “There is an impact on risk. I’m also responsible within my organization for managing the risk of data, privacy and security, and AI is bringing a new dimension to that risk. It’s an opportunity, but it's also a risk. How you structure your organization, how you manage risk, how you drive transformation -- these things are all connected.” ... “[CIOs] are emerging as transformation leaders, so they need to understand how to navigate the culture change of an organization, the change in people in an organization. They must know how to tell stories so they can get the organization on board,” says Danielle Phaneuf, a partner, PwC cloud and digital strategy operating model leader. “Their mindset is different, so they're embracing the transformation with a product model that allows them to move faster [and] allows them to think long term. They’re building these new muscles around change leadership and engaging the business early, co-creating solutions, not thinking they must solve everything on their own, and doing that in an agile way.”


What Is AI Agent Washing And Why Is It A Risk To Businesses?

You’ve heard of greenwashing and AI-washing? Well, now it seems that the hype-merchants and bandwagon-jumpers with technology to sell have come up with a new (and perhaps predictably inevitable) scam. Analysts at Gartner say unscrupulous vendors are increasingly engaging in "agent washing" and say that out of “thousands” of supposedly agentic AI products tested, only 130 truly lived up to the claim. ... So, what’s the scam? Well, according to the report, agent washing involves passing off existing automation technology, including LLM-powered chatbots and robotic process automation, as agentic, when in reality it lacks those capabilities. ... Tools that claim to be agentic because they orchestrate and pull together multiple AI systems, such as marketing automation platforms and workflow automation tools, are stretching the term, too, unless they are also capable of autonomously coordinating the usage of those tools for long-term planning and decision-making. A few more hypothetical examples: While an AI chatbot-based system can write emails on command, an agentic system might write emails, identify the best recipients for marketing purposes, send the emails out, monitor responses, and then generate follow-up emails, tailored to individual responders.


Agentic AI Architecture Framework for Enterprises

The critical decision point lies in understanding when predictability and control take precedence versus when flexibility and autonomous decision-making deliver greater value. This understanding leads to a fundamental principle: start with the simplest effective solution, adding complexity only when clear business value justifies the additional operational overhead and risk. ... Enterprise deployment of agentic AI creates an inherent tension between AI autonomy and organizational governance requirements. Our Analysis of successful MVPs and on-going production implementations across multiple industries reveals three distinct architectural tiers, each representing different trade-offs between capability and control while anticipating emerging regulatory frameworks like the EU AI Act and others coming. These tiers form a systematic maturity progression, so organizations can build competency and stakeholder trust incrementally before advancing to more sophisticated implementations. ... Our three-tier progression manifests differently across industries, reflecting unique regulatory environments, risk tolerances, customer expectations and operational requirements. Understanding these industry-specific approaches enables organizations to tailor their implementation strategies while maintaining systematic capability development.


Rewriting the rules of enterprise architecture with AI agents

In enterprise architecture, agentic AI systems can be deployed as digital “co-architects”, process optimizers, compliance monitors and scenario planners — each acting with a degree of independence and intelligence previously impossible. So why agentic AI and simulations for governance…and why now? Governance in enterprise architecture is about ensuring that IT systems, processes and data align with business goals, comply with regulations and adapt to change. ... These methods are increasingly inadequate in the face of real-time business dynamics. Agentic AI introduces a new composability model that is achievable: Governance that is continuous, adaptive and proactive. Agentic systems can monitor the enterprise landscape, simulate the impact of changes, enforce policies autonomously and even resolve conflicts or escalate issues when necessary. This results in governance that is both more robust and more responsive to business needs. Gartner’s research reinforces the impact of agency and simulations on enterprise architecture’s future. According to its Enterprise Architecture Services Predictions for 2025, 55% of EA teams will act as coordinators of autonomous governance automation by 2028 and shift from a direct oversight role to that of model curation and certification, agent simulations and oversight, and business outcome alignment with machine-led governance.


With tools like Alpha and Coherence, we’re turning risk management from reactive to real-time

Those days when it was more of a very reactive and process-heavy system, where you had to follow a set of dilutive processes all the time and react to risks being observed in the system, and then you had a standard operating procedure to deal with it step by step. Those days are behind us. That scenario was there for a number of decades. But with AI and intelligent-led solution capabilities transforming the landscape, it has become proactive and extremely real-time. So what we propose, we always have lived by our Digital Knowledge Operations framework. The three words in it: digital, knowledge, and operations. Digital makes you proactive because you’re building solutions not for today but for the future. You rely on knowledge, and you transform your operations. That’s our philosophy that unlocks this proactive ability of capturing the possibilities of risk in real time. That drove us to build something like Alpha. It’s essentially a very strong and effective transaction monitoring framework and tool that can detect a whole lot of false alerts with over 75% to 80% accuracy. Now, in risk management, what happens is that a lot of operational bandwidth, effort, and talent capability is lost in assessing all of these false positives that are generated because of risk management procedures. Most of them can be taken care of by a combination of machine learning, artificial intelligence, and some sort of robotics.


Banking on Better Data: Why Financial Institutions Need an Agile Cloud Strategy

The urgency to migrate to the cloud is particularly pronounced in the banking sector, where legacy institutions are under mounting pressure to keep pace with digital-native competitors. These agile challengers can roll out new features in a matter of weeks, while traditional banks remain constrained by older mainframes. It is clear that the risk of standing still is no longer theoretical. Earlier this year, over 1.2 million UK customers experienced banking outages on pay day, a critical moment for both individuals and businesses. Several major retail banks reported widespread issues, including login failures and prolonged delays in customer service. Far from being one-off glitches, these disruptions point to a broader pattern of structural fragility rooted in outdated technology. Unlike legacy systems, cloud-native platforms are engineered for adaptability, resilience, and real-time performance, which are traits that traditional banking environments have been struggling to deliver. These failures weren’t just accidents; they were foreseeable outcomes of prolonged underinvestment in modernization. This reinforced a critical truth for traditional banks, which is that cloud transformation is no longer a future aspiration, but an immediate requirement to safeguard customer trust and remain viable in a rapidly evolving market.


Why knowledge is the ultimate weapon in the Information Age

To turn AI into an asset rather than a liability, organisations must rethink their approach to knowledge management. At its core, knowledge management is a learning cycle centred on people, with technology acting as a force multiplier, not a substitute for judgment. The objective is to establish a virtuous loop in which data is collected, validated, and transformed into actionable insight. The tighter and more disciplined this cycle, the higher the quality of the resulting knowledge. In practice, this means treating AI as just another tool in the toolkit. ... In an age of information warfare, perception is the battleground. To stay ahead, decision-makers must be trained not just in AI tools but in understanding their strengths, limitations, and potential biases, including their own. The ability to critically assess AI-generated content is essential, not optional. More than static planning, modern organisations need situational awareness and strategic agility, embedding AI within a human-centric knowledge strategy. We can shift the balance in the information war by curating trusted sources, rigorously verifying content, and sustaining a culture of learning. This new knowledge ecosystem embraces uncertainty, leverages AI wisely, and keeps cognitive bias in control, wielding knowledge as a disciplined and secure strategic asset.

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then where we see any inefficiencies; be it in workflows or process, automation works for the other 20% of the friction. The remaining 10–20% is where the engineering plays its important role, and it allows to touch on the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is very fragmented or very poor quality which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is Agentic AI—autonomous systems which don’t just tell you what to do, but just do it. We’ve been investing heavily in this space since 2018. 


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with that from NIST; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand new PQC algorithms, whilst allowing legacy devices that do not support them, to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasise the importance of understanding the cover time of data; nor reference the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts, is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may chose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires. 


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For others like PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program,” he explains. ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle.