
Daily Tech Digest - October 26, 2025


Quote for the day:

"Everywhere is within walking distance if you have the time." -- Steven Wright


AI policy without proof is just politics

History shows us that regulation without verification rarely works. Imagine if Wall Street firms were allowed to audit their own books, or if pharmaceutical companies could approve their own drugs. The risks would be obvious and unacceptable. Yet, in AI today, much of the information policymakers see about model performance and safety comes straight from the companies developing those systems, leaving regulators dependent on the very firms they are meant to oversee. Self-reporting, intentionally or not, creates structural blind spots. Developers have incentives to highlight strengths and minimize weaknesses, and even honest disclosures can leave out important context. ... The first requirement is independence. Oversight must be based on information that does not come solely from the companies themselves: data that can be inspected, verified and trusted as neutral. Without that independence, even well-intentioned disclosures risk being selective or incomplete. The second requirement is continuity. AI systems evolve quickly, and their performance often shifts once they are deployed in the wild. Benchmarks conducted at launch can’t capture how models change over time, or how they behave across different languages, domains and user needs.  ... AI policy is at a crossroads. The U.S. has set bold goals, but without reliable evaluation, those goals risk becoming little more than rhetoric. Rules set the direction. Proof provides the trust.


5 ways ambitious IT pros can future-proof their tech careers in an age of AI

Successful IT chiefs are expected to be the expert resources for pioneering technology developments. In fact, Daly said the CIOs of the future will demonstrate how AI can fulfill some executive roles and responsibilities. ... David Walmsley, chief digital and technology officer at jewelry specialist Pandora, said up-and-coming IT stars take on responsibilities and opportunities. The disconnected technology organization of old, which relied on outsourcing arrangements for project delivery, has been replaced by a department that works closely with the business to achieve its objectives. "The days of technology leaders leaning back and saying, 'Well, which of my external providers do I blame now?' are long gone," he said. "Everyone can see that technology is a strategic lever for growing the business and helping it succeed in its mission." ... The critical skill for next-generation leaders lies not in chasing every new platform or coding language, but in cultivating the human capacities that allow you to adapt. "Those capabilities include curiosity, critical thinking, collaboration, and an understanding of human behavior," he said. "At LIS, we emphasize interdisciplinary learning precisely because technology never exists in isolation; it is always entangled with psychology, economics, ethics, and culture."


Biometrics increase integrity from age checks to agents, but not when compelled

Biometrics are anchoring trust for established but growing use cases like national IDs even as new use cases are taking off. But surveillance concerns inevitably come with increases in the collection of personal data, particularly when the collection is compelled or involuntary. ... Tech industry group the CCIA took aim at Texas’ app store level age checks, arguing the plan is bound to fail in several ways, including data privacy breaches. One of those alleged likely failures is the accuracy of facial age estimation, but the supporting stat from NIST is outdated, and the new figure is significantly better. Automated license-plate reader maker Flock and Amazon’s Ring have partnered to share data, allowing law enforcement agencies that use Flock’s investigative platforms to request footage from homeowners. ... The growth of online interactions with credentials that are anchored with biometrics continues unabated, in the form of national ID systems, agentic AI, age checks and identity verification. Juniper Research forecasts digital identity will be an $80 billion global market by 2030, with growth driven by new regulations and credentials. ... Age checks could catalyze digital ID adoption, Luciditi CPO Dan Johnson says on the Biometric Update Podcast. He makes the case for the advantages of adding age assurance to apps by integrating a component, rather than building a standalone branded app.


Weak Data Infrastructure Keeps Most GenAI Projects From Delivering ROI

Kolbeck sees companies investing billions while overlooking adequate storage to support their AI infrastructure as one of the major mistakes corporations make. He said that oversight leads to three key failure factors — festering silos, lack of performance, and uptime dilemmas. The most critical resource for AI is training data. When companies store data across multiple silos, data scientists lack access to essential details. “Storage systems must be able to scale and provide unified access to enable an AI data lake, a centralized and efficient storage for the entire company,” he observed. ... “Early AI projects may work well, but as soon as these projects grow in size [as in more GPUs], these arrays tip over, and that’s when mission-critical workflows grind to a halt,” he said. Kolbeck explained the difference between scale-out architecture versus a scale-up approach as a better option for handling the massive and unpredictable data demands of modern AI and ML. He cited his company’s experience in making that transition. ... “Developing and training AI technology is still a very experimental process and requires the infrastructure — including storage — to adapt quickly when data scientists develop new ideas,” Kolbeck noted. Real-time performance analytics are critical. Storage administrators need to be able to precisely identify how applications, such as training or other pipeline phases, impact the storage.


When your AI browser becomes your enemy: The Comet security disaster

Your regular Chrome or Firefox browser is basically a bouncer at a club. It shows you what's on the webpage, maybe runs some animations, but it doesn't really "understand" what it's reading. If a malicious website wants to mess with you, it has to work pretty hard — exploit some technical bug, trick you into downloading something nasty or convince you to hand over your password. AI browsers like Comet threw that bouncer out and hired an eager intern instead. This intern doesn't just look at web pages — it reads them, understands them and acts on what it reads. Sounds great, right? Except this intern can't tell when someone's giving them fake orders. ... They can actually do stuff: Regular browsers mostly just show you things. AI browsers can click buttons, fill out forms, switch between your tabs, even jump between different websites. ... They remember everything: Unlike regular browsers that forget each page when you leave, AI browsers keep track of everything you've done across your whole session. ... You trust them too much: We naturally assume our AI assistants are looking out for us. That blind trust means we're less likely to notice when something's wrong. Hackers get more time to do their dirty work because we're not watching our AI assistant as carefully as we should. They break the rules on purpose: Normal web security works by keeping websites in their own little boxes — Facebook can't mess with your Gmail, Amazon can't see your bank account. 


Rewriting the Rules of Software Quality: Why Agentic QA is the Future CIOs Must Champion

From continuous deployment to AI-powered applications, software systems are more dynamic, distributed and adaptive than ever. In this changing environment, static testing frameworks are crumbling. What worked yesterday is increasingly not going to work today, and tomorrow’s risks cannot be addressed using yesterday’s checklists. This is where agentic QA steps in, heralding a transformative approach that integrates autonomous, intelligent agents throughout the entire software lifecycle. ... What distinguishes this model isn’t just its intelligence — it’s its adaptability. In a world where AI models are themselves part of the application stack, QA must account for nondeterminism. Agentic systems are uniquely equipped to do this. When AI-driven components exhibit variable behavior based on internal learning states, traditional test-case comparisons fail for evident reasons. Agentic QA, on the other hand, thrives in uncertainty. It detects anomalies, learns from edge cases, and refines its approach continuously. ... However, it is essential to note that as AI takes over repetitive and complex validations, it enables QA professionals to step up and evolve into curators of quality. Their role is freed up to become more strategic: Defining testing intent, ensuring AI alignment with business goals, interpreting nuanced behaviors, and upholding ethical standards. This shift calls for a cultural transformation.


AI-Powered Ransomware Is the Emerging Threat That Could Bring Down Your Organization

AI fundamentally transforms every phase of ransomware operations through several key capabilities. Enhanced reconnaissance allows malware to autonomously scan security perimeters, identify vulnerabilities, and select precise exploitation tools. This eliminates the need for human operators during initial phases, enabling attacks to spread rapidly across IT environments. Adaptive encryption techniques represent another revolutionary advancement. AI-powered ransomware can analyze system resources and data types to modify encryption algorithms dynamically, making decryption more complex. The malware can prioritize high-value targets by analyzing document content using Natural Language Processing before encryption, ensuring maximum strategic impact. Evasive tactics powered by machine learning enable ransomware to continuously modify its code and behavior patterns. This polymorphic capability makes signature-based detection methods ineffective, as the malware presents different fingerprints with each execution. AI also enables malware to track user presence and activate during off-hours to maximize damage while minimizing detection opportunities. The financial consequences of AI-powered ransomware attacks far exceed traditional threats. ... Small businesses face particularly severe consequences, with 60% of attacked companies closing permanently within six months.


When a Provider's Lights Go Out, How Can CIOs Keep Operations Going?

This may seem obvious, but a thousand companies still lost digital functionality on Monday. Why weren't they better prepared? One answer is that while redundancy isn't new, it also isn't very sexy. In a field full of innovation and growth, redundancy is about slowing down, checking your work, and taking the safest route. It's not surprising if some companies are more excited about investing in new AI capabilities than implementing failsafe protocols. Nor is it necessarily wrong. ... "It is important to invest where failure creates real risk, not just minor inconvenience, or noise," he added. This will look different for companies of different sizes, but particularly for companies within different sectors. Some industries, such as healthcare or finance, require a higher level of redundancy across the board simply because the stakes are greater; lack of access to patient records or financial information could have severe repercussions in terms of safety and public trust, which are far beyond inconvenience or frustration. ... But this isn't as simple as tracing third-party contracts, counting how often one name appears, and shifting some operations away from too-dominant providers. If an organization has partnered predominantly with one provider, it's probably for good reason. As Hitchens explained, working with a single provider can accelerate innovation and simplify management, offering visibility, native integrations and unified tooling.


Three Ways Secure Modern Networks Unlock the True Power of AI

AI is network-bound. As always-on models demand up to 100 times more compute, storage, and bandwidth, traditional networks risk becoming bottlenecks on both capacity and latency. For AI tasks that demand instant responses, like self-driving cars or automated stock trading, even tiny delays can cause problems. Modern network infrastructure needs to be more than just fast. It also needs to be safe from cyberattacks and strong enough to handle more AI growth in the future. To realize AI’s full potential, businesses must build purpose-built “AI superhighways”: secure networks designed to scale seamlessly, handling distributed AI workloads across core, cloud, and edge environments. ... The value organizations expect from AI, be it automating workflows, unlocking predictive insights, or powering new digital experiences, depends on more than just compute power or clever algorithms. Furthermore, the demand for real-time machine data from business operations to train AI models is increasing the need for more detailed and extensive networks. This, in turn, accelerates the integration of IT and OT, and expands the adoption of the Internet of Things (IoT) ... The sensitivity of AI data flows is raising the bar for security and compliance. The risks of sticking with outdated infrastructure are stark. 95% of technology leaders say a resilient network is critical to their operations, and 77% have experienced major outages due to congestion, cyberattacks, or misconfigurations.


"It’s not about security, it’s about control" – How EU governments want to encrypt their own comms, but break our private chats

In the wake of ever-larger and more frequent cyberattacks – think of the Salt Typhoon in the US – encryption has become crucial to shielding everyone's security, whether the threat is ID theft, scams, or national security risks. Even the FBI urged all Americans to turn to encrypted chats. ... Law enforcement, however, often sees this layer of protection as an obstacle to their investigations, pushing for "lawful access" to encrypted data as a way to combat hideous crimes like terrorism or child abuse. That's exactly where legislation proposals like Chat Control and ProtectEU in the European bloc, or the Online Safety Act in the UK, come from. Yet, people working with encryption know that these solutions are flawed. On a technical level, experts all agree that an encryption backdoor cannot guarantee the same level of online security and privacy we have now. Is it then time to redefine what we mean when we talk about privacy? This is what's probably needed, according to Rocket.Chat's Strategic Advisor, Christian Calcagni. "We need to have a new definition of private communication, and that's a big debate. Encryption or no encryption, what could be the way?" Calcagni is, nonetheless, very critical of the current push to break encryption. He told me: "Why should the government know what I think or what I'm sharing on a personal level? We shouldn't focus only on encryption or not encryption, but on what that means for our privacy, our intimacy."

Daily Tech Digest - July 05, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


The Hidden Data Cost: Why Developer Soft Skills Matter More Than You Think

The logic is simple but under-discussed: developers who struggle to communicate with product owners, translate goals into architecture, or anticipate system-wide tradeoffs are more likely to build the wrong thing, need more rework, or get stuck in cycles of iteration that waste time and resources. These are not theoretical risks, they’re quantifiable cost drivers. According to Lumenalta’s findings, organizations that invest in well-rounded senior developers, including soft skill development, see fewer errors, faster time to delivery, and stronger alignment between technical execution and business value. ... The irony? Most organizations already have technically proficient talent in-house. What they lack is the environment to develop those skills that drive high-impact outcomes. Senior developers who think like “chess masters”—a term Lumenalta uses for those who anticipate several moves ahead—can drastically reduce a project’s TCO by mentoring junior talent, catching architecture risks early, and building systems that adapt rather than break under pressure. ... As AI reshapes every layer of tech, developers who can bridge business goals and algorithmic capabilities will become increasingly valuable. It’s not just about knowing how to fine-tune a model, it’s about knowing when not to.


Why AV is an overlooked cybersecurity risk

As cyber attackers become more sophisticated, they’re shifting their attention to overlooked entry points like AV infrastructure. A good example is YouTuber Jim Browning’s infiltration of a scam call center, where he used unsecured CCTV systems to monitor and expose criminals in real time. This highlights the potential for AV vulnerabilities to be exploited for intelligence gathering. To counter these risks, organizations must adopt a more proactive approach. Simulated social engineering and phishing attacks can help assess user awareness and expose vulnerabilities in behavior. These simulations should be backed by ongoing training that equips staff to recognize manipulation tactics and understand the value of security hygiene. ... To mitigate the risks posed by vulnerable AV systems, organizations should take a proactive and layered approach to security. This includes regularly updating device firmware and underlying software packages, which are often left outdated even when new versions are available. Strong password policies should be enforced, particularly on devices running webservers, with security practices aligned to standards like the OWASP Top 10. Physical access to AV infrastructure must also be tightly controlled to prevent unauthorized LAN connections. 


EU Presses for Quantum-Safe Encryption by 2030 as Risks Grow

The push comes amid growing concern about the long-term viability of conventional encryption techniques. Current security protocols rely on complex mathematical problems — such as factoring large numbers — that would take today’s classical computers thousands of years to solve. But quantum computers could potentially crack these systems in a fraction of the time, opening the door to what cybersecurity experts refer to as “store now, decrypt later” attacks. In these attacks, hackers collect encrypted data today with the intention of breaking the encryption once quantum technology matures. Germany’s Federal Office for Information Security (BSI) estimates that conventional encryption could remain secure for another 10 to 20 years in the absence of sudden breakthroughs, The Munich Eye reports. Europol has echoed that forecast, suggesting a 15-year window before current systems might be compromised. While the timeline is uncertain, European authorities agree that proactive planning is essential. PQC is designed to resist attacks from both classical and quantum computers by using algorithms based on different kinds of hard mathematical problems. These newer algorithms are more complex and require different computational strategies than those used in today’s standards like RSA and ECC. 


MongoDB Doubles Down on India's Database Boom

Chawla says MongoDB is helping Indian enterprises move beyond legacy systems through two distinct approaches. "The first one is when customers decide to build a completely new modern application, gradually sunsetting the old legacy application," he explains. "We work closely with them to build these modern systems." ... Despite this fast-paced growth, Chawla points out several lingering myths in India. "A lot of customers still haven't realised that if you want to build a modern application, especially one that's AI-driven, you can't build it on a relational structure," he explains. "Most of the data today is unstructured and messy. So you need a database that can scale, can handle different types of data, and support modern workloads." ... Even those trying to move away from traditional databases often fall into the trap of viewing PostgreSQL as a modern alternative. "PostgreSQL is still relational in nature. It has the same row-and-column limitations and scalability issues." He also adds that if companies want to build a future-proof application, especially one that infuses AI capabilities, they need something that can handle all data types and offers native support for features like full-text search, hybrid search, and vector search. Other NoSQL players such as Redis and Apache Cassandra also have significant traction in India.


AI only works if the infrastructure is right

The successful implementation of artificial intelligence is therefore closely linked to the underlying infrastructure. But how you define that AI infrastructure is open to debate. An AI infrastructure always consists of different components, which is clearly reflected in the diverse backgrounds of the participating parties. As a customer, how can you best assess such an AI infrastructure? ... For companies looking to get started with AI infrastructure, a phased approach is crucial. Start small with a pilot, clearly define what you want to achieve, and expand step by step. The infrastructure must grow with the ambitions, not the other way around. A practical approach must be based on the objectives. Then the software, middleware, and hardware will be available. For virtually every use case, you can choose from the necessary and desired components. ... At the same time, the AI landscape requires a high degree of flexibility. Technological developments are rapid, models change, and business requirements can shift from quarter to quarter. It is therefore essential to establish an infrastructure that is not only scalable but also adaptable to new insights or shifting objectives. Consider the possibility of dynamically scaling computing capacity up or down, compressing models where necessary, and deploying tooling that adapts to the requirements of the use case. 


Software abstraction: The missing link in commercially viable quantum computing

Quantum Infrastructure Software delivers this essential abstraction, turning bare-metal QPUs into useful devices, much the way data center providers integrate virtualization software for their conventional systems. Current offerings cover all of the functions typically associated with the classical BIOS up through virtual machine Hypervisors, extending to developer tools at the application level. Software-driven abstraction of quantum complexity away from the end users lets anyone, irrespective of their quantum expertise, leverage quantum computing for the problems that matter most to them. ... With a finely tuned quantum computer accessible, a user must still execute many tasks to extract useful answers from the QPU, in analogy with the need for careful memory management required to gain practical acceleration with GPUs. Most importantly, in executing a real workload, they must convert high-level “assembly-language” logical definitions of quantum applications into hardware-specific “machine-language” instructions that account for the details of the QPU in use, and deploy countermeasures where errors might leak in. These are typically tasks that can only be handled by (expensive!) specialists in quantum-device operation.
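The "assembly-to-machine" translation described above can be pictured as a compiler pass. The sketch below is purely illustrative: the gate names, the native decompositions, and the lookup-table structure are assumptions for the sake of the example, not any vendor's actual toolchain.

```python
# Hypothetical sketch: lowering "assembly-language" logical gates into
# device-specific "machine-language" instructions for a particular QPU.
# In practice the rules come from per-device calibration data.

LOGICAL_PROGRAM = ["H q0", "CNOT q0 q1", "MEASURE q0 q1"]

# Per-device translation table (illustrative native gate set).
DEVICE_RULES = {
    "H": ["RZ(pi/2)", "SX", "RZ(pi/2)"],      # H decomposed into native gates
    "CNOT": ["H q1", "CZ q0 q1", "H q1"],     # simplified decomposition
    "MEASURE": ["READOUT"],
}

def lower(program):
    """Translate each logical gate into the QPU's native instruction set."""
    machine = []
    for line in program:
        gate = line.split()[0]
        # Fall back to passing the line through if no rule applies.
        machine.extend(DEVICE_RULES.get(gate, [line]))
    return machine

print(lower(LOGICAL_PROGRAM))
```

Abstraction software performs this lowering (plus error countermeasures) automatically, which is exactly the specialist work the excerpt says end users should not have to do themselves.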


Guest Post: Why AI Regulation Won’t Work for Quantum

Artificial intelligence regulation has been in the regulatory spotlight for the past seven to ten years and there is no shortage of governments and global institutions, as well as corporations and think tanks, putting forth regulatory frameworks in response to this widely buzzy tech. AI makes decisions in a “black box,” creating a need for “explainability” in order to fully understand how determinations by these systems affect the public. With the democratization of AI systems, there is the potential for bad actors to create harm in a decentralized ecosystem. ... Because quantum systems do not learn on their own, evolve over time, or make decisions based on training data, they do not pose the same kind of existential or social threats that AI does. Whereas the implications of quantum breakthroughs will no doubt be profound, especially in cryptography, defense, drug development, and material science, the core risks are tied to who controls the technology and for what purpose. Regulating who controls technology and ensuring bad actors are disincentivized from using technology in harmful ways is the stuff of traditional regulation across many sectors, so regulating quantum should prove somewhat less challenging than current AI regulatory debates would suggest.


Validation is an Increasingly Critical Element of Cloud Security

Security engineers simply don’t have the time or resources to familiarize themselves with the vast number of cloud services available today. In the past, security engineers primarily needed to understand Windows and Linux internals, Active Directory (AD) domain basics, networks and some databases and storage solutions. Today, they need to be familiar with hundreds of cloud services, from virtual machines (VMs) to serverless functions and containers at different levels of abstraction. ... It’s also important to note that cloud environments are particularly susceptible to misconfigurations. Security teams often primarily focus on assessing the performance of their preventative security controls, searching for weaknesses in their ability to detect attack activity. But this overlooks the danger posed by misconfigurations, which are not caused by bad code, software bugs, or malicious activity. That means they don’t fall within the definition of “vulnerabilities” that organizations typically test for—but they still pose a significant danger.  ... Securing the cloud isn’t just about having the right solutions in place — it’s about determining whether they are functioning correctly. But it’s also about making sure attackers don’t have other, less obvious ways into your network.
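Misconfiguration checks of the kind described differ from vulnerability scans: they validate settings against policy rather than searching for bad code. A minimal sketch, with resource shapes and rule names invented for illustration (not tied to any real cloud provider's API):

```python
# Illustrative misconfiguration validator: each rule is a named predicate
# over a resource's settings; a finding means policy was violated, even
# though no software bug or malicious activity is involved.

RULES = [
    ("public_bucket", lambda r: r.get("type") == "bucket" and r.get("public_access")),
    ("open_ssh", lambda r: r.get("type") == "firewall"
                           and "0.0.0.0/0" in r.get("ssh_sources", [])),
    ("unencrypted_disk", lambda r: r.get("type") == "disk"
                                   and not r.get("encrypted", False)),
]

def validate(resources):
    """Return (resource_name, rule_name) for every rule a resource violates."""
    findings = []
    for res in resources:
        for name, check in RULES:
            if check(res):
                findings.append((res["name"], name))
    return findings

inventory = [
    {"name": "logs", "type": "bucket", "public_access": True},
    {"name": "db-disk", "type": "disk", "encrypted": True},
]
print(validate(inventory))  # [('logs', 'public_bucket')]
```

Because nothing here is a "vulnerability" in the classic sense, none of these findings would surface in a scan focused on detecting attack activity — which is the blind spot the excerpt warns about.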


Build and Deploy Scalable Technical Architecture a Bit Easier

A critical challenge when transforming proof-of-concept systems into production-ready architecture is balancing rapid development with future scalability. At one organization, I inherited a monolithic Python application that was initially built as a lead distribution system. The prototype performed adequately in controlled environments but struggled when processing real-world address data, which, by their nature, contain inconsistencies and edge cases. ... Database performance often becomes the primary bottleneck in scaling systems. Domain-Driven Design (DDD) has proven particularly valuable for creating loosely coupled microservices, with its strategic phase ensuring that the design architecture properly encapsulates business capabilities, and the tactical phase allowing the creation of domain models using effective design patterns. ... For systems with data retention policies, table partitioning proved particularly effective, turning one table into several while maintaining the appearance of a single table to the application. This allowed us to implement retention simply by dropping entire partition tables rather than performing targeted deletions, which prevented database bloat. These optimizations reduced average query times from seconds to milliseconds, enabling support for much higher user loads on the same infrastructure.
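The retention trick above — dropping whole partitions instead of issuing targeted deletes — can be sketched in miniature. The in-memory `PartitionedTable` class below is a toy stand-in for database-level partitioning, with month-granularity keys chosen for illustration:

```python
from datetime import date

# Illustrative sketch of partition-based retention: each month of data lives
# in its own partition, so enforcing retention means dropping partitions
# wholesale rather than deleting rows one by one (which causes bloat).

class PartitionedTable:
    def __init__(self):
        self.partitions = {}  # partition key "YYYY-MM" -> list of rows

    def insert(self, row):
        key = row["created"].strftime("%Y-%m")
        self.partitions.setdefault(key, []).append(row)

    def enforce_retention(self, cutoff):
        """Drop every partition strictly older than the cutoff month."""
        stale = sorted(k for k in self.partitions if k < cutoff.strftime("%Y-%m"))
        for key in stale:
            del self.partitions[key]   # cheap partition drop vs. per-row DELETE
        return stale

table = PartitionedTable()
table.insert({"created": date(2024, 1, 15), "lead": "A"})
table.insert({"created": date(2025, 6, 1), "lead": "B"})
print(table.enforce_retention(date(2025, 1, 1)))  # ['2024-01']
```

In a real relational database the same idea is expressed with native table partitioning, where the application still sees a single logical table.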


What AI Policy Can Learn From Cyber: Design for Threats, Not in Spite of Them

The narrative that constraints kill innovation is both lazy and false. In cybersecurity, we’ve seen the opposite. Federal mandates like the Federal Information Security Modernization Act (FISMA), which forced agencies to map their systems, rate data risks, and monitor security continuously, and state-level laws like California’s data breach notification statute created the pressure and incentives that moved security from afterthought to design priority.  ... The irony is that the people who build AI, like their cybersecurity peers, are more than capable of innovating within meaningful boundaries. We’ve both worked alongside engineers and product leaders in government and industry who rise to meet constraints as creative challenges. They want clear rules, not endless ambiguity. They want the chance to build secure, equitable, high-performing systems — not just fast ones. The real risk isn’t that smart policy will stifle the next breakthrough. The real risk is that our failure to govern in real time will lock in systems that are flawed by design and unfit for purpose. Cybersecurity found its footing by designing for uncertainty and codifying best practices into adaptable standards. AI can do the same if we stop pretending that the absence of rules is a virtue.

Daily Tech Digest - April 08, 2025


Quote for the day:

"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi



AI demands more software developers, not less

Entry-level software development will change in the face of AI, but it won’t go away. As LLMs increasingly handle routine coding tasks, the traditional responsibilities of entry-level developers—such as writing boilerplate code—are diminishing. Instead, entry-level developers will evolve into AI supervisors; they’ll test outputs, manage data labeling, and integrate code into broader systems. This necessitates a deeper understanding of software architecture, business logic, and user needs. Doing this effectively requires a certain level of experience and, barring that, mentorship. The dynamic between junior and senior engineers is shifting. Seniors need to mentor junior developers in AI tool usage and code evaluation. Collaborative practices such as AI-assisted pair programming will also offer learning opportunities. Teams are increasingly co-creating with AI; this requires clear communication and shared responsibilities across experience levels. Such mentorship is essential to prevent more junior engineers from depending too heavily on AI, which results in shallow learning and a downward spiral of productivity loss. Across all skill levels, companies are scrambling to upskill developers in AI and machine learning. A late-2023 survey in the United States and United Kingdom showed 56% of organizations listed prowess in AI/ML as their top hiring priority for the coming year.


Ask a CIO Recruiter: How AI is Shaping the Modern CIO Role

Everything right now revolves around AI, but you still as CIO have to have that grounding in all of the traditional disciplines of IT. Whether that is systems, whether that’s infrastructure, whether that’s cybersecurity, you have to have that well-rounded background. Even as these AI technologies become more prolific, you must consider your past infrastructure spend, your cloud spend, that went into these technologies. How do you manage that? If you don’t have grounding in managing those costs, and being able to balance those costs with the innovation you are trying to create, that’s a recipe for failure on the cyber side. ... When we’re looking for skill sets, we’re looking for people who have actually taken those AI technologies and applied them within their organizations to create real business value -- whether that is cost savings or top-line revenue creation, whatever those are. It’s hard to find those candidates, because there are a lot of those people who can talk the talk around AI, but when you really drill down there is not much in terms of results to show. It’s new, especially in applying the technology to certain settings. Take manufacturing: there’s not that many CIOs out there who have great examples of applying AI to create value within organizations. It’s certainly accelerating, and you’re going to see it accelerating more as we go into the future. It’s just so new that those examples are few and far between.


Architectural Experimentation in Practice: Frequently Asked Questions

When the cost of reversing a decision is low or trivial, experimentation does not reduce cost very much and may actually increase cost. Prior experience with certain kinds of decisions usually guides the choice; if team members have worked on similar systems or technical challenges, they will have an understanding of how easily a decision can be reversed. ... Experiments are more than just playing around with technology. There is a place for playing with new ideas and technologies in an unstructured, exploratory way, and people often say that they are "experimenting" when they are doing this. When we talk about experimentation, we mean a process that involves forming a hypothesis and then building something that tests this hypothesis, either accepting or rejecting it. We prefer to call the other approach "unstructured exploratory learning", a category that includes hackathons, "10% Time", and other professional development opportunities. ... Experiments should have a clear duration and purpose. When you find an experiment that’s not yielding results in the desired timeframe, it’s time to stop it and design something else to test your hypothesis that will yield more conclusive results. The "failed" experiment can still yield useful information, as it may indicate that the hypothesis is difficult to prove or may influence subsequent, more clearly defined experiments.


Optimizing IT with Open Source: A Guide to Asset Management Solutions

Orchestration frameworks are crucial for developing sophisticated AI applications that can perform tasks beyond simply answering a single question. While a single LLM is proficient in understanding and generating text, many real-world AI applications require performing a series of steps involving different components. Orchestration frameworks provide the structure necessary to design and manage these complex workflows, ensuring that all the various components of the AI system work together efficiently. ... One way orchestration frameworks enhance the power of LLMs is through a technique known as “prompt chaining.” Think of it as telling a story one step at a time. Instead of giving the LLM a single, lengthy instruction, you provide it with a series of smaller, interconnected instructions known as prompts. The response from one prompt then becomes the starting point for the following prompt, guiding the LLM through a more complex thought process. Open-source orchestration frameworks make it much simpler to create and manage these chains of prompts. They often provide tools that allow developers to easily link prompts together, sometimes through visual interfaces or programming tools. Prompt chaining can be helpful in many situations.
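The chaining idea described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular framework's API: `call_llm` is a hypothetical stand-in for a real model call (here it just returns canned responses), and the prompt templates are invented for the example.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns canned responses for this demo."""
    canned = {
        "Summarize: The report covers Q3 revenue growth.": "Q3 revenue grew.",
        "Translate to French: Q3 revenue grew.": "Le chiffre d'affaires du T3 a augmenté.",
    }
    return canned.get(prompt, "unknown")

def chain_prompts(initial_input: str, templates: list[str]) -> str:
    """Feed each step's response into the next prompt template in the chain."""
    result = initial_input
    for template in templates:
        result = call_llm(template.format(result))
    return result

final = chain_prompts(
    "The report covers Q3 revenue growth.",
    ["Summarize: {}", "Translate to French: {}"],
)
print(final)  # -> Le chiffre d'affaires du T3 a augmenté.
```

The key property is that each step's output becomes the next step's input; orchestration frameworks wrap this loop with error handling, retries, and branching logic.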


Reframing DevSecOps: Software Security to Software Safety

A templatized, repeatable, process-led approach, driven by collaboration between platform and security teams, leads to a fundamental shift in the way teams think about their objectives. They move from the concept of security, which promises a state free from danger or threat, to safety, which focuses on creating systems that are protected from and unlikely to create danger. This shift emphasizes proactive risk mitigation through thoughtful, reusable design patterns and implementation rather than reactive threat mitigation. ... The outcomes between security products and product security are vastly different with the latter producing far greater value. Instead of continuing to shift responsibilities, development teams should embrace the platform security engineering paradigm. By building security directly into shared processes and operations, development teams can scale up to meet their needs today and in the future. Only after these strong foundations have been established should teams layer in routinely run security tools for assurance and problem identification. This approach, combined with aligned incentives and genuine collaboration between teams, creates a more sustainable path to secure software development that works at scale.


10 things you should include in your AI policy

A carefully thought-out AI use policy can help a company set criteria for risk and safety, protect customers, employees, and the general public, and help the company zero in on the most promising AI use cases. “Not embracing AI in a responsible manner is actually reducing your advantage of being competitive in the marketplace,” says Bhrugu Pange, managing director who leads the technology services group at AArete, a management consulting firm. ... An AI policy needs to start with the organization’s core values around ethics, innovation, and risk. “Don’t just write a policy to write a policy to meet a compliance checkmark,” says Avani Desai, CEO at Schellman, a cybersecurity firm that works with companies on assessing their AI policies and infrastructure. “Build a governance framework that’s resilient, ethical, trustworthy, and safe for everyone — not just so you have something that nobody looks at.” Starting with core values will help with the creation of the rest of the AI policy. “You want to establish clear guidelines,” Desai says. “You want everyone from top down to agree that AI has to be used responsibly and has to align with business ethics.” ... Taking a risk-based approach to AI is a good strategy, says Rohan Sen, data risk and privacy principal at PwC. “You don’t want to overly restrict the low-risk stuff,” he says.


FedRAMP's Automation Goal Brings Major Promises - and Risks

FedRAMP practitioners, federal cloud security specialists and cybersecurity professionals who spoke to Information Security Media Group welcomed the push to automate security assessments and streamline approvals. They warned that without clear details on execution, the changes risk creating new uncertainties and disrupting companies midway through the existing process. Program officials said they will establish a series of community working groups to serve as a platform for industry and the public to engage directly with FedRAMP experts and collaborate on solutions that meet its standards and policies. "This is both exciting and scary," said John Allison, senior director of federal advisory services for the federal cybersecurity solutions provider Optiv + ClearShark. "As someone who works with clients on their FedRAMP strategy, this is going to open new options for companies - but I can see a lot of uncertainty weighing heavily on corporate leadership until more details are available." Automation may help reduce costs and timelines, he said, but companies mid-process could face disruption and agencies will shoulder more responsibility until new tools are in place. Allison said GSA could further streamline FedRAMP by allowing cloud providers to submit materials directly and pursue authorization without an agency sponsor.


Is hyperscaler lock-in threatening your future growth?

Infrastructure flexibility has increasingly become a competitive differentiator. Enterprises that maintain the ability to deploy workloads across multiple environments—whether hyperscaler, private cloud, or specialized provider—gain strategic advantages that extend beyond operational efficiency. This cloud portability empowers organizations to select the optimal infrastructure for each application and workload based on their specific requirements rather than provider limitations. When a new service emerges that delivers substantial business value, companies with diversified infrastructure can adopt it without dismantling their existing technology stack. Central to maintaining this flexibility is the strategic adoption of open source technologies. Enterprise-grade open source solutions provide the consistency and portability that proprietary alternatives cannot match. By standardizing on technologies like Kubernetes for container orchestration, PostgreSQL for database services, or Apache Kafka for event streaming, organizations create a foundation that works consistently across any infrastructure environment. The most resilient enterprises approach their technology stack like a portfolio manager approaches investments—diversifying strategically to maximize returns while minimizing exposure to any single point of failure.


7 risk management rules every CIO should follow

The most critical risk management rule for any CIO is maintaining a comprehensive, continuously updated inventory of the organization’s entire application portfolio, proactively identifying and mitigating security risks before they can materialize, advises Howard Grimes, CEO of the Cybersecurity Manufacturing Innovation Institute, a network of US research institutes focusing on developing manufacturing technologies through public-private partnerships. That may sound straightforward, but many CIOs fall short of this fundamental discipline, Grimes observes. ... Cybersecurity is now a multi-front war, Selby says. “We no longer have the luxury of anticipating the attacks coming at us head-on.” Leaders must acknowledge the interdependence of a robust risk management plan: Each tier of the plan plays a vital role. “It’s not merely a cyber liability policy that does the heavy lifting or even top-notch employee training that makes up your armor — it’s everything.” The No. 1 way to minimize risk is to start from the top down, Selby advises. “There’s no need to decrease cyber liability coverage or slack on a response plan,” he says. Cybersecurity must be an all-hands-on-deck endeavor. “Every team member plays a vital role in protecting the company’s digital assets.” 


Shift-Right Testing: Smart Automation Through AI and Observability

Shift-right testing goes beyond the conventional approach of pre-release testing, enabling development teams to validate software under real-world conditions. This approach includes canary releases, where new features are released to a subset of users before the full launch. It also involves A/B testing, where two versions of the application are compared in real time. Another important practice is chaos engineering, in which failures are deliberately introduced to check the strength of the system. ... Chaos engineering is the practice of injecting controlled failures into the system to assess its robustness, with the help of tools like Chaos Monkey and Gremlin. This helps validate the actual behavior of a system in a production-like environment. Testing feedback loops are also automated to ensure that shift-right is applied consistently, using AI-powered test analytics tools like Testim and Applitools to learn from test case selection. This makes it possible to use production data to inform the automatic generation of test suites, increasing coverage and precision. Real-time alerting and self-healing mechanisms also enhance shift-right testing. Observability tools can be set up to send alerts whenever a test fails, and auto-remediation scripts enable test environments to repair themselves when they fail, without involving IT staff.

Daily Tech Digest - December 01, 2024

Why microservices might be finished as monoliths return with a vengeance

Migrating to a microservice architecture has been known to cause complex interactions between services, circular calls, data integrity issues and, to be honest, it is almost impossible to get rid of the monolith completely. Let’s discuss why some of these issues occur once migrated to the microservices architecture. ... When moving to a microservices architecture, each client needs to be updated to work with the new service APIs. However, because clients are so tied to the monolith’s business logic, this requires refactoring their logic during the migration. Untangling these dependencies without breaking existing functionality takes time. Some client updates are often delayed due to the work’s complexity, leaving some clients still using the monolith database after migration. To avoid this, engineers may create new data models in a new service but keep existing models in the monolith. When models are deeply linked, this leads to data and functions split between services, causing multiple inter-service calls and data integrity issues. ... Data migration is one of the most complex and risky elements of moving to microservices. It is essential to accurately and completely transfer all relevant data to the new microservices. 


InputSnatch – A Side-Channel Attack Allow Attackers Steal The Input Data From LLM Models

Researchers found that both prefix caching and semantic caching, which are used by many major LLM providers, can leak information about what users type in without them meaning to. Attackers can potentially reconstruct private user queries with alarming accuracy by measuring the response time. The lead researcher said, “Our work shows the security holes that come with improving performance. This shows how important it is to put privacy and security first along with improving LLM inference.” “We propose a novel timing-based side-channel attack to execute input theft in LLMs inference. The cache-based attack faces the challenge of constructing candidate inputs in a large search space to hit and steal cached user queries. To address these challenges, we propose two primary components.” “The input constructor uses machine learning and LLM-based methods to learn how words are related to each other, and it also has optimized search mechanisms for generalized input construction.” ... The research team emphasizes the need for LLM service providers and developers to reassess their caching strategies. They suggest implementing robust privacy-preserving techniques to mitigate the risks associated with timing-based side-channel attacks.
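The timing leak described above can be illustrated with a toy simulation. This is not the researchers' actual attack code; the latency model (one time unit per uncached character) and the example queries are invented for illustration, but they capture the core observation: a prefix-cache hit skips work, and the resulting latency difference is measurable.

```python
class PrefixCacheServer:
    """Toy inference server: every prompt it serves warms a shared prefix cache."""
    def __init__(self):
        self.cached_prompts = []

    def query(self, prompt: str) -> float:
        # Simulated latency is proportional to characters NOT covered by the
        # longest shared prefix with any previously cached prompt.
        best = 0
        for cached in self.cached_prompts:
            shared = 0
            for a, b in zip(prompt, cached):
                if a != b:
                    break
                shared += 1
            best = max(best, shared)
        self.cached_prompts.append(prompt)
        return 1.0 * (len(prompt) - best)  # 1 time unit per uncached character

server = PrefixCacheServer()
server.query("What is the treatment for diabetes")  # victim's private query warms the cache

# Attacker probes candidate queries and picks the one with the lowest latency.
candidates = ["What is the treatment for flu", "What is the treatment for diabetes"]
timings = {c: server.query(c) for c in candidates}
leaked = min(timings, key=timings.get)
print(leaked)  # -> What is the treatment for diabetes (latency 0: full cache hit)
```

Mitigations discussed in this space include per-user cache partitioning and adding noise or quantization to response timing, at some cost to the performance the cache was meant to provide.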


Ransomware Gangs Seek Pen Testers to Boost Quality

As cybercriminal groups grow, specialization is a necessity. In fact, as cybercriminal gangs grow, their business structures increasingly resemble a corporation, with full-time staff, software development groups, and finance teams. By creating more structure around roles, cybercriminals can boost economies of scale and increase profits. ... some groups required specialization in roles based on geographical need — one of the earliest forms of contract work for cybercriminals is for those who can physically move cash, a way to break the paper trail. "Of course, there's recruitment for roles across the entire attack life cycle," Maor says. "When you're talking about financial fraud, mule recruitment ... has always been a key part of the business, and of course, development of the software, of malware, and end of services." Cybercriminals' concerns over software security boil down to self-preservation. In the first half of 2024, law enforcement agencies in the US, Australia, and the UK — among other nations — arrested prominent members of several groups, including the ALPHV/BlackCat ransomware group and seized control of BreachForums. The FBI was able to offer a decryption tool for victims of the BlackCat group — another reason why ransomware groups want to shore up their security.


Forget All-Cloud or All-On-Prem: Embrace Hybrid for Agility and Cost Savings

Hybrid isn’t just about cutting costs — it boosts speed, security, and performance. Agile applications run faster in the cloud, where teams can quickly spin up, test, and launch without the limits of on-prem systems. This agility becomes especially valuable when delivering software quickly to meet market demands without compromising the core stability of the entire system. Security and compliance are also critical drivers of hybrid adoption. Regulatory mandates often require data to remain on-premises to ensure compliance with local data residency laws. Hybrid infrastructure allows companies to move customer-facing applications to the cloud while keeping sensitive data on-prem. This separation of data from the front-end layers has become common in sectors like finance and government, where compliance demands and data security are non-negotiable. I have been speaking regularly to the CTOs of two very large banks in the US. They currently manage 15-20% of their workloads in the cloud and estimate the most they will ever have in the cloud would be 40-50%. They tell me the rest will stay on-prem — always — so they will always need to manage a hybrid environment.


Minimizing Attack Surface in the Cloud Environment

The increased dependence on and popularity of the cloud environment expands the attack surface. These are the potential entry points, including network devices, applications, and services that attackers can exploit to infiltrate the cloud and access systems and sensitive data. ... Cloud services rely upon APIs for seamless integration with third-party applications or services. As the number of APIs increases, they expand the attack surface for attackers to exploit. Hackers can easily target insecure or poorly designed APIs that lack encryption or robust authentication mechanisms and access data resources, leading to data leaks and account takeover. ... Devices or applications not approved or supported by the IT team are called shadow IT. Since many of these devices and apps do not undergo the same security controls as corporate ones, they are more vulnerable to hacking, putting the data stored within them at risk of manipulation. ... Unaddressed security gaps or errors threaten cloud assets and data. Attackers can exploit misconfigurations and vulnerabilities in cloud-hosted services, resulting in data breaches and other cyber attacks.


AI & structured cabling: Are they such unusual bedfellows?

The key word here is “structured” (its synonyms include organized, precise and efficient). When “structured” precedes the word “cabling,” it immediately points to a standardized way to design and install a cabling system that will be compliant to international standards, whilst providing a flexible and future-ready approach capable of supporting multiple generations of AI hardware. Typically, an AI data center’s structured cabling will be used to connect pieces of IT hardware together using high-performance, ultra-low loss optical fiber and Cat6A copper. ... What do we know about AI? Network speeds are constantly changing, and it feels like it’s happening on a daily basis. 400G and 800G are a reality today, with 1.6T coming soon. Just a few years ago, who would have believed that it was possible? Structured cabling offers the type of scalability and flexibility needed to accommodate these speed changes and the future growth of AI networks. ... Data centers are the “factory floor” of AI operations, and as AI continues to impact all areas of our lives, it will become increasingly integrated into emerging technologies like 5G, IoT, and Edge computing. This trend will only further emphasize the need for robust and scalable high-speed cabling systems.


Business Automation: Merging Technology and Skills

As technology progresses, business owners are eager for solutions that can handle repetitive tasks, freeing up time for their teams to focus on more strategic activities. One of the most effective strategies to achieve this is through business automation—a combination of technology and human skills that streamlines processes and boosts productivity. Business automation is designed to complement rather than replace human efforts. It helps teams reduce repetitive tasks, allowing them to concentrate on what matters most, such as improving customer satisfaction and driving innovation. By implementing automation, companies can increase productivity as routine jobs—like data entry and scheduling—are managed by automated systems. This shift not only saves time but also minimises errors associated with manual processes. Automation also enables better resource allocation. The insights gained from automated tools empower teams to make informed decisions and direct resources where they are needed most. Furthermore, real-time reporting offers valuable data that supports timely decision-making. Effective team management is crucial for any business, and automation can enhance productivity and accountability. 


Scaffolding for the South Africa National AI Policy Framework

The lack of specific responsibility assignment and cross-sectoral coordination mechanisms undermines the framework’s utility in guiding downstream activity. It is not too early to start articulating appropriate institutional arrangements, or encouraging debates between different models. A proposed multi-stakeholder platform to guide implementation lacks details about representation, participation criteria, and decision-making processes. This institutional uncertainty is further complicated by strained budgets and unclear funding mechanisms for new structures. Next, the framework’s integration with the existing policy landscape is inadequate. There is value in horizontal policy coherence across trade, competition, and other sectors. Reference to South Africa’s developmental policy course as articulated in the various Medium-Term Strategic Frameworks and in the National Development Plan 2030 would be helpful. There is a focus on transformation, development, and capacity-building, strengthening the intentions set out in the 2019 White Paper on Science, Technology and Innovation, which emphasizes ICT's role in furthering developmental goals within a socio-economic context that features high unemployment rates.


The DevSecOps Mindset: What It Is and Why You Need It

Navigating the delicate balance between speed and security is challenging for all organizations. That’s why so many are converting to the DevSecOps mindset. That said, the transition is not all smooth sailing. Below are a few common factors that stand in the way of the security-first approach:

Cultural Resistance: Teams may resist integrating security into fast-moving DevOps pipelines due to the extra initiative that individuals must take.
Lack of Security Expertise: Many developers lack the deep security knowledge required to identify vulnerabilities early on, given the fast pace of technological innovation and creative threat actors.
Limited Resources for Automation: Smaller organizations may struggle with the cost of automation tools.

While DevSecOps adoption might face a few hurdles, building a culture of regular security and automation brings many advantages that outweigh them. To name a few:

Reduced Security Risks: By addressing security from the beginning, vulnerabilities get identified and resolved before they reach production. Organizations using DevSecOps practices experience a 50% reduction in security vulnerabilities compared to those that follow traditional development processes.


Talent in the new normal: How to manage fast-changing tech roles

The new workplace is one where automation and AI will be front and center. This has caught the imagination of today’s CIOs looking to move faster and scale. There’s no part of the business that can’t be automated. But how can the CIO build the culture, skills, and mindset to align with this new era of work, while also fostering growth? It will require CIOs to think differently. What might have worked five years ago will not cut it today. A good culture is key to an organization running effectively. This is why many of the biggest tech companies invest so heavily in making their offices a nice place to be. Culture is one of the intangible factors that make or break a professional’s happiness – and, by extension, their ability to work well. The CIO’s role in managing the organization’s growth is critical. CIOs understand how teams operate and, as a result, are well-placed to support their organization’s hiring and onboarding processes. Here, it’s not just about finding talent with the right skills, but also ensuring they meet the cultural needs of the organization. At a time when skills shortages are still a major challenge, what digital leaders should be looking for are candidates with an open mind and a desire to learn and grow. 



Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman