
Daily Tech Digest - September 24, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


Managing Technical Debt the Right Way

Here’s the uncomfortable truth: most executives don’t care about technical purity, but they do care about value leakage. If your team can’t deliver new features fast enough, if outages are too frequent, if security holes are piling up, that is financial debt—just wearing a hoodie instead of a suit. The BTABoK approach is to make debt visible in the same way accountants handle real liabilities. Use canvases, views, and roadmaps to connect the hidden cost of debt to business outcomes. Translate debt into velocity lost, time to market, and risk exposure. Then prioritize it just like any other investment. ... If your architects can’t tie debt decisions to value, risk, and strategy, then they’re not yet professionals. Training and certification are not about passing an exam. They are about proving you can handle debt like a surgeon handles risk—deliberately, transparently, and with the trust of society. ... Let’s not sugarcoat it: some executives will always see debt as “nerd whining.” But when you put it into the lifecycle, into the transformation plan, and onto the balance sheet, it becomes a business issue. This is the same lesson learned in finance: debt can be a powerful tool if managed, or a silent killer if ignored. BTABoK doesn’t give you magic bullets. It gives you a discipline and a language to make debt a first-class concern in architectural practice. The rest is courage—the courage to say no to shortcuts that aren’t really shortcuts, to show leadership the cost of delay, and to treat architectural decisions with the seriousness they deserve.


How National AI Clouds Undermine Democracy

The rapid spread of sovereign AI clouds unintentionally creates a new form of unchecked power. It combines state authority with corporate technology in unclear public-private partnerships. This combination centralizes surveillance and decision-making power, extending far beyond effective democratic oversight. The pursuit of national sovereignty undermines the civic sovereignty of individuals. ... The unique and overlooked danger is the rise of a permanent, unelected techno-bureaucracy. Unlike traditional government agencies, these hybrid entities are shielded from democratic pressures. Their technical complexity acts as a barrier against public understanding and journalistic inquiry. ... no sovereign cloud should operate without a corresponding legislative data charter. This charter, passed by the national legislature, must clearly define citizens' rights against algorithmic discrimination, set explicit limits on data use, and create transparent processes for individuals harmed by the system. It should recognize data portability as an essential right, not just a technical feature. ... every sovereign AI initiative should be mandated to serve the public good. These systems must legally demonstrate that they fulfill publicly defined goals, with their performance measured and reported openly. This directs the significant power of AI toward applications that benefit the public, such as enhancing healthcare outcomes or building climate resilience.


IT’s renaissance risks losing steam

IT-enabled value creation will etiolate without the sustained light of stakeholder attention. CIOs need to manage IT signals, symbols, and suppositions with an eye toward recapturing stakeholder headspace. Every IT employee needs to get busy defanging the devouring demons of apathy and ignorance surrounding IT operations today. ... We need to move beyond our “hero on horseback” obsession with single actors. Instead we need to return our efforts forcefully to l’histoire des mentalités — the study of the mental universe of ordinary people. How is l’homme moyen sensuel (the man on the street) dealing with the technological choices arrayed before him? ... The IT pundits’ much discussed promise of “technology transformation” will never materialize if appropriate exothermic — i.e., behavior-inducing and energy-creating — IT ideas have no mass following among those working at the screens around the world. ... As CIO, have you articulated a clear vision of what you want IT to achieve during your tenure? Have you calmed the anger of unmet expectations, repaired the wounds of system outages, alleviated the doubts about career paths, charted a filled-with-benefits road forward and embodied the hopes of all stakeholders? ... The cognitive elephant in the room that no one appears willing to talk about is the widespread technological illiteracy of the world’s population.


How One Bad Password Ended a 158-Year-Old Business

KNP's story illustrates a weakness that continues to plague organizations across the globe. Research from Kaspersky analyzing 193 million compromised passwords found that 45% could be cracked by hackers within a minute. And when attackers can simply guess or quickly crack credentials, even the most established businesses become vulnerable. Individual security lapses can have organization-wide consequences that extend far beyond the person who chose "Password123" or left their birthday as their login credential. ... KNP's collapse demonstrates that ransomware attacks create consequences far beyond an immediate financial loss. Seven hundred families lost their primary income source. A company with nearly two centuries of history disappeared overnight. And Northamptonshire's economy lost a significant employer and service provider. For companies that survive ransomware attacks, reputational damage often compounds the initial blow. Organizations face ongoing scrutiny from customers, partners, and regulators who question their security practices. Stakeholders seek accountability for data breaches and operational failures, leading to legal liabilities. ... KNP joins an estimated 19,000 UK businesses that suffered ransomware attacks last year, according to government surveys. High-profile victims have included major retailers like M&S, Co-op, and Harrods, demonstrating that no organization is too large or established to be targeted.


Has the UK’s Cyber Essentials scheme failed?

There are several reasons why larger organisations may steer clear of CE in its current form, explains Kearns. “They typically operate complex, often geographically dispersed networks, where basic technical controls driven by CE do not satisfy organisational appetite to drive down risk and improve resilience,” she says. “The CE control set is also ‘absolute’ and does not allow for the use of compensating controls. Large complex environments, on the other hand, often operate legacy systems that require compensating controls to reduce risk, which prevents compliance with CE.” The point-in-time nature of assessment is also a poor fit for today’s dynamic IT infrastructure and threat environments, argues Pierre Noel, field CISO EMEA at security vendor Expel. ... “For large enterprises with complex IT environments, CE may not be comprehensive enough to address their specific security needs,” says Andy Kays, CEO of MSSP Socura. “Despite these limitations, it still serves a valuable purpose as a baseline, especially for supply chain assurance where larger companies want to ensure their smaller partners have a minimum level of security.” Richard Starnes is an experienced CISO and chair of the WCIT security panel. He agrees that large enterprises should require CE+ certification in their supplier contracts, where it makes sense. “This requirement should also include a contract flow-down to ensure that their suppliers’ downstream partners are also certified,” says Starnes.


Is Your Data Generating Value or Collecting Digital Dust?

Economic uncertainty is prompting many companies to think about how to do more with less. But what if they’re actually positioned to do more with more and just don’t realize it? Many organizations already have the resources they need to improve efficiency and resilience in challenging times. Close to two-thirds of organizations manage 1 petabyte or more of data, which represents enough data to cover 500 billion standard pages of text. More than 40% of companies store even more data. Much of that data sits unanalyzed while it incurs costs related to collection, compliance, and storage. It also poses data breach risks that require expensive security measures to prevent. ... Engaging with too many apps often makes employees less efficient than they could be. In 2024, companies used an average of 21 apps just for HR tasks. Multiply that across different functions, and it’s easy to see how finding ways to reduce the total could bring down costs. Trimming the number of apps can also increase productivity by reducing employee overwhelm. Constantly switching between different apps and systems has been shown to distract employees while increasing their levels of stress and frustration. Across the organization, switching among tasks and apps consumes 9% of the average employee’s time at work, chipping away at their attention and ability to focus a few seconds at a time with each of the hundreds of task switches they perform every day.
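As a back-of-the-envelope check on those figures — assuming roughly 2,000 bytes of text per standard page and an 8-hour workday, neither of which is stated in the article:

```python
# Back-of-the-envelope check of the figures above.
# Assumptions (not from the article): ~2,000 bytes per standard page,
# and an 8-hour workday for the 9% task-switching estimate.
BYTES_PER_PAGE = 2_000

petabyte = 10**15  # 1 PB in bytes (decimal convention)
pages = petabyte // BYTES_PER_PAGE
print(f"1 PB ≈ {pages:,} pages")  # 500,000,000,000 — i.e., 500 billion

workday_minutes = 8 * 60
lost_minutes = workday_minutes * 0.09
print(f"≈ {lost_minutes:.0f} minutes per day lost to switching")
```

Under those assumptions the article's 500-billion-page figure checks out, and 9% of a workday is roughly three-quarters of an hour every day.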


The history and future of software development

For any significant piece of software back then, you needed stacks of punch cards. Yes, 1000 lines of code needed 1000 cards. And you needed to have them in order. Now, imagine dropping that stack of 1000 cards! It would take me ages to get them back in order. Devs back then experienced this a lot—so some of them came up with creative ways of indicating the order of these cards. ... By the mid-1970s, affordable home computers were starting to become a reality. Instead of a computer just being a work thing, hobbyists started using computers for personal things—maybe we can call these, I don't know...personal computers. ... Assembler and assembly tend to be used interchangeably, but they are in reality two different things. Assembly is the actual language—the syntax and instructions being used—and is tightly coupled to the architecture. The assembler, meanwhile, is the piece of software that assembles your assembly code into machine code—the thing your computer knows how to execute. ... What about writing the software? Did they use git back then? No, git only came out in 2005, so back then software version control was quite the manual effort—from developers having their own ways of managing source code locally to wall charts where developers could "claim" ownership of certain source code files. For those able to work on a shared (multi-user) system, or with an early version of networked storage, source code sharing was as easy as handing out floppy disks.
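One of those creative ordering schemes was built into the card format itself: by convention, columns 73–80 of an 80-column card were reserved for a sequence number, so a dropped deck could be restored by hand or fed through a card sorter. A small sketch of that recovery (the card contents here are illustrative):

```python
# Historical convention: columns 73-80 of an 80-column punch card held a
# sequence number, leaving columns 1-72 for the code itself.
def resequence(deck):
    """Restore card order using the sequence field in columns 73-80."""
    return sorted(deck, key=lambda card: int(card[72:80]))

# A tiny "dropped" deck: code padded to column 72, sequence number at the end.
cards = [
    "      PRINT *, 'WORLD'".ljust(72) + "00000020",
    "      END".ljust(72)              + "00000030",
    "      PRINT *, 'HELLO'".ljust(72) + "00000010",
]
for card in resequence(cards):
    print(card[:30].rstrip(), card[72:])
```

Numbering by tens, as above, left room to insert new cards between existing ones without repunching the whole deck.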


Why the operating system is no longer just plumbing

Many enterprises still think of the operating system as a “static” or background layer that doesn’t need active evolution. The reality is that modern operating systems like Red Hat Enterprise Linux (RHEL) are dynamic, intelligent platforms that actively enable and optimize everything running on top of them. Whether you're training AI models, deploying cloud-native applications, or managing edge devices, the OS is making thousands of critical decisions every second about resource allocation, security enforcement, and performance optimization. ... With image mode deployments, zero-downtime updates, and optimized container support, RHEL ensures that even resource-constrained environments can maintain enterprise-grade reliability. We’ve also focused heavily on security—confidential computing, quantum-resistant cryptography, and compliance automation—because edge environments are often exposed to greater risk. These choices allow RHEL to deliver resilience in conditions where compute power, space, and connectivity are limited. ... We don't just take community code and ship it — we validate, harden, and test everything extensively. Red Hat bridges this gap by being an active contributor upstream while serving as an enterprise-grade curator downstream. Our ecosystem partnerships ensure that when new technologies emerge, they work reliably with RHEL from day one.


Ransomware now targeting backups, warns Google’s APAC security chief

Backups often contain sensitive data such as personal information, intellectual property, and financial records. Pereira warned that attackers can use this data as extra leverage or sell it on the dark web. The shift in focus to backup systems underscores how ransomware has become less about disruption and more about business pressure. If an organisation cannot restore its systems independently, it has little choice but to consider paying a ransom. ... Another troubling trend is “cloud-native extortion,” where attackers abuse built-in cloud features, such as encryption or storage snapshots, to hold systems hostage. Pereira explained that many organisations in the region are adapting by shifting to identity-focused security models. “Cloud environments have become the new perimeter, and attackers have been weaponising cloud-native tools,” he said. “We now need to enforce strict cloud security hygiene, such as robust MFA, least privilege access, proactive monitoring of role access changes or credential leaks, using automation to detect and remediate misconfigurations, and anomaly detection tools for cloud activities.” He pointed to rising investments in identity and access management tools, with organisations recognising their role in cutting down the risk of identity-based attacks. For APAC businesses, this means moving away from legacy perimeter defences and embracing cloud-native safeguards that assume breaches are inevitable but limit the damage.
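The "automation to detect and remediate misconfigurations" idea can be sketched as a simple inventory scan. The bucket records and field names below are hypothetical, not any real cloud provider's API:

```python
# Illustrative sketch of automated misconfiguration detection for storage
# holding backups. The inventory format here is hypothetical.
def find_misconfigurations(buckets):
    findings = []
    for b in buckets:
        if b.get("public_access"):
            findings.append((b["name"], "publicly accessible"))
        if not b.get("encrypted"):
            findings.append((b["name"], "encryption at rest disabled"))
        if not b.get("versioning"):
            findings.append((b["name"], "versioning off (weak ransomware recovery)"))
    return findings

inventory = [
    {"name": "backups-prod", "public_access": False, "encrypted": True,  "versioning": False},
    {"name": "logs-temp",    "public_access": True,  "encrypted": False, "versioning": True},
]
for name, issue in find_misconfigurations(inventory):
    print(f"{name}: {issue}")
```

In practice such checks run continuously against the real provider inventory, with remediation (or at least alerting) wired to each finding.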


AI Won't Replace Developers, It Will Make the Best Ones Indispensable

The replacement theory assumes AI can work independently, but it can't. Today's AI coding tools don't run themselves, they need active steering. Most AI tools today operate on a "prompt and pray" model: give the AI instructions, get code back, hope it works. That's fine for demos or side projects, but production environments are far less forgiving. ... AI doesn't level the playing field between developers, it widens it. Using AI effectively requires the same skills that make great developers great: understanding system architecture, recognizing security implications, writing maintainable code. ... Tomorrow's junior developers will need to get productive in a different way. Instead of spending months learning basic syntax and patterns, they'll start by learning to collaborate with AI agents effectively. Those who can adapt will find opportunities, and those who can't might struggle to break in. This shift actually creates more demand for senior engineers, because someone needs to train these AI-assisted junior developers, architect systems that can handle AI-generated code at scale, and establish the processes and standards that keep AI tools from creating chaos. ... The teams succeeding with AI coding treat agents like exceptionally capable junior teammates who need oversight. They provide detailed context, review generated code, and test thoroughly before deployment rather than optimizing purely for speed.

Daily Tech Digest - September 27, 2024

What happens when everybody winds up wearing ‘AI body cams’?

The first body cams were primitive. They were enormous, had narrow, 68-degree fields of view, had only 16GB of internal storage, and had batteries that lasted only four hours. Body cams now usually have high-resolution sensors, GPS, infrared for low-light conditions, and fast charging. They can be automatically activated through Bluetooth sensors, weapon release, or sirens. They use backend management systems to store, analyze, and share video footage. The state of the art — and the future of the category — is multimodal AI. ... Using such a system in multimodal AI, a user could converse with their AI agent, asking questions about what the glasses were pointed at previously. These glasses will almost certainly have a dashcam-like feature where video is constantly recorded and deleted. Users can push a button to capture and store the past 30 seconds or 30 minutes of video and audio — basically creating an AI body cam worn on the face. Smart glasses will be superior to body cams, and over time, AI body cams for police and other professionals will no doubt be replaced by AI camera glasses. This raises the question: When everybody has AI body cams — specifically glasses with AI body cam functionality — what does society then look like?
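That "constantly recorded and deleted" behavior is a classic ring buffer: old frames fall off as new ones arrive, and pressing the button simply snapshots what's currently in the buffer. A minimal sketch, with illustrative frame-rate and window assumptions:

```python
from collections import deque

# Sketch of dashcam-style "record constantly, keep only the recent past."
# Frame rate and window length are illustrative assumptions.
FPS = 30
WINDOW_SECONDS = 30

buffer = deque(maxlen=FPS * WINDOW_SECONDS)  # oldest frames drop automatically

def record_frame(frame):
    buffer.append(frame)      # continuous recording loop

def save_clip():
    return list(buffer)       # "push the button": snapshot the last 30 s

for i in range(2000):         # simulate about 66 seconds of video
    record_frame(f"frame-{i}")

clip = save_clip()
print(len(clip), clip[0], clip[-1])
```

Only the snapshot is persisted; everything older has already been discarded by the buffer, which is what makes the always-on recording privacy-defensible in the first place.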


Aligning Cloud Costs With Sustainability and Business Goals

AI is poised for democratization, similar to the cloud. Users will have the choice and ability to use multiple models for numerous use cases. Future trends indicate a rise in culturally aware and industry-specific models that will further facilitate the democratization of AI. Singapore's National Research Foundation launched AI Singapore - a national program to enhance the country's AI capabilities - to make its LLMs more culturally accurate, localized and tailored to Southeast Asia. AWS is working with Singapore public organizations to develop innovative, industry-first solutions powered by AI and gen AI, including AI Singapore's SEA-LION. Building on AWS' scalable compute infrastructure, SEA-LION is a family of LLMs that is specifically pre-trained and instruct-tuned for Southeast Asian languages and cultures. AWS released the Amazon Bedrock managed service to support gen AI deployments for large enterprises. It now provides easy access to multiple large language models and foundation models from AI21 Labs, Anthropic, Cohere, Meta and Stability AI through a single API, along with a broad set of capabilities organizations need to build gen AI applications with security, privacy and responsible AI.


Fortifying the Weakest Link: How to Safeguard Against Supply Chain Cyberattacks

Failures in systems and processes by third parties can lead to catastrophic reputational and operational damage. It is no longer sufficient to merely implement basic vendor management procedures. Organizations must also take proactive measures to safeguard against third-party control failures. ... Protect administrative access to the tools and applications used by DevOps teams. Enable secure application configuration via secrets and authenticate applications and services with high confidence. Mandate that software suppliers certify and extend security controls to cover microservices, cloud, and DevOps environments. ... Ensure that your systems and those of your suppliers are regularly updated and patched for known vulnerabilities. Prevent the use of unsupported or outdated software that could introduce new vulnerabilities. ... Configure cloud environments to reject authorization requests involving tokens that deviate from accepted norms. For on-premises systems, follow the National Security Agency’s guidelines by deploying a Federal Information Processing Standards (FIPS)-validated Hardware Security Module (HSM) to store token-signing certificate private keys. HSMs significantly reduce the risk of key theft by threat actors.
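The advice to "reject authorization requests involving tokens that deviate from accepted norms" can be sketched as a claims check layered on top of signature verification. The claim names below follow common JWT conventions; the accepted issuers, audiences, and lifetime limit are hypothetical:

```python
import time

# Sketch of rejecting tokens that deviate from accepted norms.
# Accepted values are hypothetical; real enforcement starts with verifying
# the token signature (with keys kept in an HSM, per the guidance above).
ACCEPTED_ISSUERS = {"https://sso.example.internal"}
ACCEPTED_AUDIENCES = {"internal-api"}
MAX_LIFETIME_SECONDS = 3600  # reject suspiciously long-lived tokens

def token_acceptable(claims, now=None):
    now = now if now is not None else time.time()
    if claims.get("iss") not in ACCEPTED_ISSUERS:
        return False
    if claims.get("aud") not in ACCEPTED_AUDIENCES:
        return False
    exp, iat = claims.get("exp", 0), claims.get("iat", 0)
    if exp <= now:                        # expired
        return False
    if exp - iat > MAX_LIFETIME_SECONDS:  # lifetime outside accepted norms
        return False
    return True

good = {"iss": "https://sso.example.internal", "aud": "internal-api",
        "iat": 1000, "exp": 2000}
forged = dict(good, exp=10**12)  # absurd lifetime, typical of forged tokens
print(token_acceptable(good, now=1500), token_acceptable(forged, now=1500))
```

A forged token signed with a stolen key can carry a valid signature, which is why anomalous lifetimes and claim values are worth rejecting even after the cryptographic check passes.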


Are hardware supply chain attacks “cyber attacks?”

In the case of hardware supply chain attacks, malicious actors infiltrate the supply of devices, or the physical manufacturing process of pieces of hardware and purposefully build in security flaws, faulty parts, or backdoors they know they can take advantage of in the future, such as malicious microchips on a circuit board. For Cisco’s part, the Cisco Trustworthy technologies program, including secure boot, Cisco Trust Anchor module (TAm), and runtime defenses give customers the confidence that the product is genuinely from Cisco. As I was thinking about the threat of hardware supply chain attacks, I was left wondering who, exactly, should be tasked with solving this problem. And I think I’ve decided the onus falls on several different sectors. It shouldn’t just be viewed as a cybersecurity issue, because for a hardware supply chain attack, an adversary would likely need to physically infiltrate or tamper with the manufacturing process. Entering a manufacturing facility or other stops along the logistics chain would require some level of network-level manipulation, such as faking a card reader or finding a way to trick physical defenses — that’s why Cisco Talos Incident Response looks for these types of things in Purple Team exercises.


How The Digital Twin Helps Build Resilient Manufacturing Operations

The digital twin is a sophisticated tool. It must be a true working virtual replica of the physical asset. Anything short of that means problems. To make it all work, consider several key aspects. You will most likely need multiple digital twins of the same physical asset. At least one digital twin should be online most of the time, collecting data from the real world. Other copies of the digital twin might be offline at times, but they use the real-world data in various training situations and for optimizing the equipment and the line. Getting data from the real world into the digital twin is one of the best and most common uses for the Industrial Internet of Things (IIoT). The latest digital twins are incorporating AI to help optimize the design process, learn from previous designs and create new equipment designs. AI helps create operator training scenarios and optimizes the equipment and production line. AI learns from the optimization process and, even with new wrinkles thrown into the real world, learns how to optimize the optimization process. It helps troubleshoot the equipment, finding problems quickly, long before they become problems.
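The online twin's job — continuously ingesting IIoT telemetry and comparing the real asset against its virtual replica — can be sketched minimally. Field names, the smoothing factor, and the drift metric are illustrative assumptions, not a real digital-twin product's API:

```python
# Minimal sketch of an online digital twin ingesting real-world telemetry
# and flagging drift between the physical asset and its virtual replica.
# Field names and constants here are illustrative.
class DigitalTwin:
    def __init__(self, expected_rpm):
        self.expected_rpm = expected_rpm
        self.observed_rpm = expected_rpm

    def ingest(self, reading):
        # Exponential smoothing of the incoming IIoT sensor stream.
        self.observed_rpm = 0.8 * self.observed_rpm + 0.2 * reading["rpm"]

    def drift(self):
        return abs(self.observed_rpm - self.expected_rpm) / self.expected_rpm

twin = DigitalTwin(expected_rpm=1500)
for rpm in [1500, 1498, 1460, 1420, 1400]:  # gradual slowdown on the line
    twin.ingest({"rpm": rpm})
print(f"drift: {twin.drift():.1%}")
```

Crossing a drift threshold is the kind of early signal the article describes: the twin spots the divergence long before it becomes a hard failure on the floor.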


3 tips for securing IoT devices in a connected world

Comprehensive visibility refers to an organization’s ability to identify, monitor and remotely manage each individual device connected to its network. Gaining this level of visibility is a crucial first step for maintaining a robust security posture and preventing unauthorized access or potential breaches. ... Addressing common vulnerabilities like built-in backdoors and unpatched firmware is essential for maintaining the security of connected devices. Built-in backdoors are hidden or undocumented access points in a device’s software or firmware that allow unauthorized access to the device or its network. These backdoors are often left by manufacturers for maintenance or troubleshooting purposes but can be exploited by attackers if not properly secured. ... One important step in secure deployment is limiting access to critical resources using network segmentation. Network segmentation involves dividing a network into smaller, isolated segments or subnets, each with its own security controls. This practice limits the movement of threats across the network, reducing the risk of a compromised IoT device leading to a broader security breach. 
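The segmentation idea above — IoT devices confined to their own subnet with only allow-listed destinations reachable — can be sketched as a policy check. The subnets and allow-list here are illustrative assumptions:

```python
import ipaddress

# Sketch of network segmentation for IoT: devices live in their own subnet
# and may only reach an allow-listed set of destinations.
# All addresses below are illustrative.
IOT_SUBNET = ipaddress.ip_network("10.20.0.0/24")
ALLOWED_FROM_IOT = {
    ipaddress.ip_network("10.10.5.0/28"),  # telemetry collectors only
}

def iot_traffic_allowed(src, dst):
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if src not in IOT_SUBNET:
        return True  # this rule only constrains traffic from the IoT segment
    return any(dst in net for net in ALLOWED_FROM_IOT)

print(iot_traffic_allowed("10.20.0.7", "10.10.5.3"))  # telemetry: allowed
print(iot_traffic_allowed("10.20.0.7", "10.0.0.15"))  # corporate LAN: blocked
```

In a real deployment the same policy would live in firewall or VLAN ACL rules; the point is that a compromised camera in 10.20.0.0/24 cannot pivot into the corporate network.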


Why countries are in a race to build AI factories in the name of sovereign AI

“The number of sovereign AI clouds is really quite significant,” Huang said in the earnings call. He said Nvidia wants to enable every company to build its own custom AI models. The motivations weren’t just about keeping a country’s data in local tech infrastructure to protect it. Rather, they saw the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation, said Colette Kress, CFO of Nvidia, in the earnings call. That was around the time when the Biden administration was restricting sales of the most powerful AI chips to China, requiring a license from the U.S. government before shipments could happen. That licensing requirement is still in effect. As a result, China reportedly began its own attempts to create AI chips to compete with Nvidia’s. But it wasn’t just China. Kress also said Nvidia was working with the Indian government and its large tech companies like Infosys, Reliance and Tata to boost their “sovereign AI infrastructure.” Meanwhile, French private cloud provider Scaleway was investing in regional AI clouds to fuel AI advances in Europe as part of a “new economic imperative,” Kress said. 


Is Spring AI Strong Enough for AI?

While the Spring framework itself does not have a dedicated AI library, it has proven to be an effective platform for developing AI-driven systems when combined with robust AI/ML frameworks. Spring Boot and Spring Cloud provide essential capabilities for deploying AI/ML models, managing REST APIs, and orchestrating microservices, all of which are crucial components for building and deploying production-ready AI systems. ... Spring, typically known as a versatile enterprise framework, showcases its effectiveness in high-quality AI deployments when combined with its robust scalability, security, and microservice architecture features. Its seamless integration with machine learning models, especially through REST APIs and cloud infrastructure, positions it as a formidable choice for enterprises seeking to integrate AI with intricate business systems. Nevertheless, for more specialized tasks such as model versioning, training orchestration, and rapid prototyping, specialized tooling like TensorFlow Serving and MLflow, along with orchestration platforms like Kubernetes, offers tailored solutions that excel in high-performance model serving, distributed AI workflows, and streamlined management of the complete machine learning lifecycle with minimal manual effort.
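The pattern the article describes — a trained model exposed behind a REST endpoint — would be a Spring Boot controller in Java; the same shape can be sketched language-neutrally with Python's stdlib WSGI interface. The model, route, and payload format here are hypothetical:

```python
import json

# Sketch of the model-behind-a-REST-endpoint pattern discussed above
# (a Spring Boot @RestController would play this role in Java).
def fake_model(features):
    # Stand-in for a real trained model: a hypothetical linear scorer.
    return sum(f * w for f, w in zip(features, [0.4, 0.6]))

def app(environ, start_response):
    if environ["PATH_INFO"] == "/predict":
        length = int(environ.get("CONTENT_LENGTH") or 0)
        body = environ["wsgi.input"].read(length)
        features = json.loads(body)["features"]
        payload = json.dumps({"score": fake_model(features)}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [payload]
    start_response("404 Not Found", [])
    return [b""]
```

The framework's contribution — whether Spring Boot or a WSGI server — is everything around this handler: routing, serialization, security, and scaling, which is exactly the glue the article credits Spring with providing.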


Top Skills Chief AI Officers Must Have to Succeed in Modern Workplace

Domain knowledge is obviously vital. Possessing an understanding of core AI concepts is a must. Machine learning (ML), data analytics, and software development are elementary requirements a capable CAIO will leverage for specific business goals. Given the incipient stage that AI transformation is at, candidates will have to supplement their knowledge with continuous learning, adaptability, and initiative. Notably, a CAIO must use their expertise to arrive at data-driven decisions—it sets a good professional apart and highlights their capacity to troubleshoot accurately. ... A CAIO must translate AI concepts into clear strategies, prioritizing among multiple potential implementations based on their judgment of what will deliver the greatest value. This involves setting concrete goals such as improved efficiency, enhanced customer engagement, or increased employee productivity, and devising a roadmap to achieve them. ... Beyond the technical knowledge and strategic acumen, a powerful grasp of how business processes work within an organisation and why they function the way they do is crucial. CAIOs must foremost align with this culture and find ways to integrate AI within that framework.


5 Ways to Keep Global Development Teams Productive

A significant challenge for global development teams is ensuring smooth collaboration between different locations. Without the right tools and processes, team members can experience delays due to time zone differences, slow data access, or inconsistent version control systems. To improve collaboration, development teams should implement systems that provide fast, reliable access to codebases, regardless of location. Real-time collaboration tools that synchronize work across global teams are essential. For instance, platforms that replicate repositories in real-time across different sites ensure that all team members are working with the latest version of the code, reducing the risk of inconsistencies. ... Compliance with data protection laws, such as the GDPR or CCPA, is also essential for companies working across borders. Development teams need to be mindful of where data is stored and ensure that their tools meet the necessary compliance requirements. Security policies should be applied consistently across all locations to prevent breaches and data leaks, which can lead to significant financial and reputational damage.



Quote for the day:

“Without continual growth and progress, such words as improvement, achievement, and success have no meaning.” -- Benjamin Franklin