Daily Tech Digest - January 18, 2025

Beyond RAG: How cache-augmented generation reduces latency, complexity for smaller workloads

RAG is an effective method for handling open-domain questions and specialized tasks. It uses retrieval algorithms to gather documents that are relevant to the request and adds context to enable the LLM to craft more accurate responses. ... First, advanced caching techniques are making it faster and cheaper to process prompt templates. The premise of CAG is that the knowledge documents will be included in every prompt sent to the model. Therefore, you can compute the attention values of their tokens in advance instead of doing so when receiving requests. This upfront computation reduces the time it takes to process user requests. Leading LLM providers such as OpenAI, Anthropic and Google provide prompt caching features for the repetitive parts of your prompt, which can include the knowledge documents and instructions that you insert at the beginning of your prompt. ... And finally, advanced training methods are enabling models to do better retrieval, reasoning and question-answering on very long sequences. In the past year, researchers have developed several LLM benchmarks for long-sequence tasks, including BABILong, LongICLBench, and RULER. These benchmarks test LLMs on hard problems such as multiple retrieval and multi-hop question-answering. 
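The precomputation idea behind CAG can be sketched in plain Python. This is an illustration only: a real CAG system caches the transformer's attention (KV) states for the static prefix, and the hash below merely stands in for that expensive encoding step.

```python
import hashlib

class CachedPromptEncoder:
    """Illustrative sketch (not a vendor API): encode the static knowledge
    documents once, up front, and reuse that state for every request."""

    def __init__(self, knowledge_documents: str):
        self.encode_calls = 0
        # Done once at startup -- the expensive part in a real system.
        self.prefix_state = self._encode(knowledge_documents)

    def _encode(self, text: str) -> str:
        # Stand-in for computing attention values over the tokens.
        self.encode_calls += 1
        return hashlib.sha256(text.encode()).hexdigest()

    def answer(self, user_query: str) -> str:
        # Per request, only the short query is encoded; the document
        # state is reused from the cache instead of being recomputed.
        query_state = self._encode(user_query)
        return f"{self.prefix_state[:8]}:{query_state[:8]}"

encoder = CachedPromptEncoder("...long knowledge documents...")
encoder.answer("What is the refund policy?")
encoder.answer("How do I reset my password?")
print(encoder.encode_calls)  # 3: one document pass plus one per query
```

Without the cache, each request would pay for re-encoding the full documents; with it, request latency depends only on the query.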


Turning Curiosity into a Career: The Power of OSINT

The beauty of OSINT is that you can start learning and practicing right now, even without a formal background in cybersecurity. Begin by familiarizing yourself with publicly available tools and resources. Social media platforms, search engines and public record databases are great starting points. From there, you can explore specialized tools like Google Dorking for advanced searches, reverse image search for photo analysis, and platforms like Maltego or SpiderFoot for more in-depth investigations. The OSINT Framework provides an extensive list of tools. If you're interested in pursuing OSINT as a career, consider taking advantage of free and paid online courses. Certifications such as GIAC Open Source Intelligence (GOSI) or Certified Ethical Hacker (CEH) can help build your credibility in the field. Participating in OSINT challenges or contributing to community projects is also a great way to hone your skills and showcase your abilities to potential employers. The demand for OSINT skills is growing as technology evolves and data becomes more accessible. Artificial intelligence and machine learning are enhancing OSINT capabilities, making it easier to analyze massive datasets and detect patterns.
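As a small illustration of how "Google Dorking" works, the advanced-search operators can be assembled programmatically. The operators shown (`site:`, `filetype:`, `intitle:`) are standard Google search syntax; the helper function itself is a hypothetical sketch.

```python
def build_dork(terms, site=None, filetype=None, intitle=None):
    """Assemble a Google 'dork' query string from common advanced operators."""
    parts = list(terms)
    if site:
        parts.append(f"site:{site}")         # restrict to one domain
    if filetype:
        parts.append(f"filetype:{filetype}") # restrict to a file type
    if intitle:
        parts.append(f'intitle:"{intitle}"') # require words in the page title
    return " ".join(parts)

query = build_dork(["annual report"], site="example.com", filetype="pdf")
print(query)  # annual report site:example.com filetype:pdf
```

Such queries surface publicly indexed documents that ordinary keyword searches miss, which is exactly why they are a staple of OSINT work.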


Five Trends That Will Drive Software Development in 2025

While organizations worldwide have quickly adopted AI for software development, many still struggle to measure its impact across diverse teams and business functions. Next year, organizations will become more sophisticated about measuring the return on their AI investments and better understand the value this technology can provide. This starts with looking more closely at specific outcomes. Instead of asking a broad question like, ‘How is AI helping my organization?’ leaders should study the impact of AI on tasks, such as test generation, documentation or language translation, and measure the gains in efficiency and productivity for these activities. ... While developers already work at breakneck speed today, technical debt is a persistent issue. The most worrying consequence of this debt is vulnerabilities that can creep into code and go unnoticed or unfixed. Next year, developers will expand their use of AI in software development to significantly reduce technical debt and increase the security of their code. Technical debt often occurs when developers choose an easy or quick solution instead of a better approach that takes longer. Vulnerabilities result when the code is poorly structured, not sufficiently reviewed or when testing is rushed or incomplete.


A Cloud Architect’s Guide to E-Commerce Data Storage

Latency, measured in microseconds, is the enemy of e-commerce storage systems, as slow-performing systems can mean hundreds of thousands of dollars in lost transactions and abandoned shopping carts. Your data platform must be reliable and highly performant even during fluctuating demand; events like Black Friday or unexpected social media trends can put a heavy load on your systems. Infrastructure that supports real-time data processing can be the deciding factor in staying competitive. These challenges necessitate a modern approach to storage — one that is software-defined, scalable and cloud-ready. ... Foundational elements of a modern e-commerce infrastructure consist of software-defined storage often combined with open-source environments like OpenStack, OpenShift, KVM and Kubernetes. The challenge for platform architects, whether building their e-commerce storage platform on premises or in the cloud, is to achieve scale and flexibility without compromising application and site performance. Many legacy storage systems, especially those architected for spinning disks, have performance limitations, resulting in data silos and expensive and time-consuming scaling strategies.


Demand and Supply Issues May Impact AI in 2025

Executives are asking for ROI numbers on analytics, data governance, and data quality programs, and they are demanding dollar values as opposed to “improving customer experience” or “increasing operational efficiency.” ... Organizations have expected quick returns but not realized them because the initial expectations were unrealistic. Later comes the realization that the proper foundation has not been put in place. “Folks are saying they expect ROI in at least three years and more than 30% or so are saying that it would take three to five years when we’ve got two years of generative AI. [H]ow can you expect it to perform so quickly when you think it will take at least three years to realize the ROI? Some companies, some leadership, might be freaking out at this moment,” says Chaurasia. “I think the majority of them have spent half a million on generative AI in the last two years and haven’t gotten anything in return. That's where the panic is setting in.” Explaining ROI in terms of dollars is difficult, because it’s not as easy as multiplying time savings by individual salaries. Some companies are working to develop frameworks, however. ... If enterprises are reducing AI investments because the anticipated benefits aren’t being realized, vendors will pull back.


4 Strategies To Thrive In A Manager-Less Workplace

One of the most important skills you can build is emotional regulation. Work can be intense, often frustrating. It’s easy to get caught up in your own emotions and—since emotions are catching—other people’s as well. Staying even-keeled pays off in maintaining good relationships with peers and also keeping yourself clear-headed so you can problem-solve when things go wrong. You can work on your emotional self-control by learning the tools of journaling and mindfulness. ... When you communicate powerfully, you navigate more easily. You get what you need more efficiently, you sell your ideas, and you build better relationships. All of these outcomes are useful when you’re on your own to build a case for getting promoted. The best way to build these skills is to practice. Volunteer to give large presentations and ask for feedback. Craft your emails and Slack messages with an understanding of the receiver and ask them if they have suggestions for you. ... Your network inside your company can also provide the emotional support you would have gotten from your manager. And, when it comes time for you to be promoted, in most companies you need your colleagues to support you. Look around at your coworkers to see who are the most interesting, plugged-in, or effective.


Dark Data: Recovering the Lost Opportunities

Dark data is the data collected and stored by an organization but is not analyzed or used for any essential purpose. It is frequently referred to as "data that lies in the shadows" because it is not actively used or essential in decision-making processes. ... Dark data can be highly beneficial to businesses as it offers insights and business intelligence that wouldn't be available otherwise. Companies that analyze dark data can better understand their customers, operations, and market trends. This enables them to make the best decisions and improve overall performance. Dark data can help organizations recoup lost opportunities by uncovering previously unknown patterns and trends. ... Once the dark data has been collected, it must be cleansed before further analysis. This may include deleting duplicate data, correcting errors, and formatting information to make it easier to work with. After the data has been cleansed and categorized, it can be examined to reveal patterns and insights that will aid decision-making. ... Collaborating with cross-functional teams, such as IT, data science, and business divisions, can assist in guaranteeing that dark data is studied in light of the organization's broader goals and objectives. 
The difference between “data deletion” and “data destruction” is critical to understand. “Data deletion” simply means removing a file from a system, making it appear inaccessible, while “data destruction” is a more thorough process that permanently erases data from a storage device, making it completely irretrievable. Deleting data isn’t enough. Without proper destruction protocols, “deleted” data remains vulnerable to breaches and regulatory noncompliance, and stays recoverable with common data recovery tools. ... A well-defined data destruction policy is your organization’s first line of defense. It outlines when, how, and under what circumstances data should be destroyed. Without a formal policy, data is often overlooked, forgotten, or destroyed haphazardly, creating compliance and security risks. To implement this, start by identifying and classifying the types of data your organization collects, such as PII or proprietary records. Define clear retention periods based on regulatory requirements like GDPR or CCPA and document the necessary steps, tools, and roles for secure destruction. Assign accountability to ensure oversight and follow-through. A formal policy isn’t just a “nice-to-have.” It’s a compliance requirement for many regulations, including GDPR and CCPA.
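The distinction can be illustrated in Python. Deletion merely unlinks the file, while the simplified "destruction" below overwrites the contents first. This sketch is illustrative only: on SSDs and journaling filesystems, overwriting in place is not sufficient, and real programs rely on certified erasure tools or physical destruction.

```python
import os
import tempfile

def delete_file(path):
    # "Data deletion": the directory entry goes away, but the bytes may
    # remain on disk until overwritten, and can be recovered with tools.
    os.remove(path)

def destroy_file(path, passes=3):
    # Simplified "data destruction": overwrite contents before unlinking.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace the bytes with random data
            f.flush()
            os.fsync(f.fileno())       # force the overwrite to disk
    os.remove(path)

fd, path = tempfile.mkstemp()
os.write(fd, b"PII: jane.doe@example.com")
os.close(fd)
destroy_file(path)
print(os.path.exists(path))  # False
```

The point of the demonstration is the overwrite loop: after `delete_file`, the original bytes are still on the platter; after `destroy_file`, they are gone before the name is.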


Can GenAI Restore the ‘Humanity’ in Banking that Digital Has Removed?

Abbott is not arguing for turning customers directly over to GenAI — not yet. Even the most-advanced pioneers his firm works with aren’t risking that. ... Abbott believes GenAI, as it becomes a standard part of banking, will play out in a similar way. Employees will adapt, often more slowly than anticipated, but they will change. This will lead to shifts in the role of management vis-à-vis employees empowered by GenAI. Abbott says this will likely take a similar path to that seen as banks adopted agile development. Young people came into the bank using the tools, just as many are already experimenting with GenAI. Banking leaders liked the idea of their organizations "doing agile." But what Abbott calls "the frozen middle" management tier had to grin and plunge into unfamiliar turf. "That frozen middle will have to thaw out and find a new way of working," says Abbott. Bank leadership must help by providing tools and opportunities for trying it out. One of the biggest early challenges will be tempering the GenAI tech to the task. Abbott explains that GenAI can be tuned to be "low temperature" or "high temperature," or somewhere in between. The former refers to GenAI working with tight guardrails, such as in sensitive areas like dispute management. 
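The "temperature" knob Abbott describes maps to a standard sampling parameter: the model's logits are divided by the temperature before the softmax, so low values sharpen the output distribution (tight guardrails, near-deterministic answers) and high values flatten it (more exploratory output). A minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into a probability distribution at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # "low temperature": confident
high = softmax_with_temperature(logits, 2.0)  # "high temperature": flatter

# The top token dominates at low temperature and loses its edge at high.
print(low[0] > high[0])  # True
```

This is why dispute management runs "low temperature": the bank wants the most probable, tightly constrained answer nearly every time.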


Federated learning: The killer use case for generative AI

Federated learning is emerging as a game-changing approach for enterprises looking to leverage the power of LLMs while maintaining data privacy and security. Rather than moving sensitive data to LLM providers or building isolated small language models (SLMs), federated learning enables organizations to train LLMs using their private data where it resides. Everyone who worries about moving private enterprise data to a public space, such as uploading it to an LLM, can continue to have “private data.” Private data may exist on a public cloud provider or in your data center. The real power of federation comes from the tight integration between private enterprise data and sophisticated LLM capabilities. This integration allows companies to leverage their proprietary information and broader knowledge in models like GPT-4 or Google Gemini without compromising security. ... As enterprises struggle to balance AI capabilities against data privacy concerns, federated learning provides the best of both worlds. Also, it allows for a choice of LLMs. You can leverage LLMs that are not a current part of your ecosystem but may be a better fit for your specific application. For instance, LLMs that focus on specific verticals are becoming more popular. 
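A toy sketch of the federated idea (not any vendor's API): each site computes an update against its private data, and only the updates, never the raw data, are averaged into the shared model.

```python
def local_update(weight, private_data, lr=0.1):
    # Toy "training step": nudge the weight toward the local data mean.
    # The private data never leaves this function's site.
    local_mean = sum(private_data) / len(private_data)
    return weight + lr * (local_mean - weight)

def federated_average(weight, site_datasets):
    # Each site trains locally; only the resulting updates are aggregated.
    updates = [local_update(weight, data) for data in site_datasets]
    return sum(updates) / len(updates)

global_weight = 0.0
sites = [[1.0, 2.0, 3.0], [4.0, 5.0], [0.0, 0.0, 6.0]]  # stays on-site
for _ in range(50):
    global_weight = federated_average(global_weight, sites)

# Converges toward the average of the site means without pooling the data.
print(round(global_weight, 2))
```

Real federated LLM fine-tuning swaps the toy scalar for model gradients or adapter weights, but the privacy property is the same: the aggregator sees updates, not records.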



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - January 17, 2025

The Architect’s Guide to Understanding Agentic AI

All business processes can be broken down into two planes: a control plane and a tools plane. See the graphic below. The tools plane is a collection of APIs, stored procedures and external web calls to business partners. However, for organizations that have started their AI journey, it could also include calls to traditional machine learning models (wave No. 1) and LLMs (wave No. 2) operating in “one-shot” mode. ... The promise of agentic AI is to use LLMs with full knowledge of an organization’s tools plane and allow them to build and execute the logic needed for the control plane. This can be done by providing a “few-shot” prompt to an LLM that has been fine-tuned on an organization’s tools plane. Below is an example of a “few-shot” prompt that answers the same hypothetical question presented earlier. This is also known as letting the LLM think slowly. ... If agentic AI still seems to be made up of too much magic, then consider the simple example below. Every developer who has to write code daily probably asks an LLM a question similar to the one below. ... Agentic AI is the next logical evolution of AI. It is based on capabilities with a solid footing in AI’s first and second waves. The promise is the use of AI to solve more complex problems by allowing them to plan, execute tasks and revise — in other words, allowing them to think slowly. This also promises to produce more accurate responses.
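A minimal, hypothetical sketch of the control-plane/tools-plane split: the tools plane is a registry of callable APIs, and the control plane is a plan (here a hard-coded list standing in for the steps an LLM would generate) executed one step at a time. All names below are illustrative.

```python
# Tools plane: the organization's callable APIs.
TOOLS = {
    "get_order_total": lambda order: sum(item["price"] for item in order["items"]),
    "apply_discount": lambda total, pct: round(total * (1 - pct / 100), 2),
}

def run_plan(plan, context):
    """Control plane: execute each step, feeding outputs back into context."""
    for step in plan:
        tool = TOOLS[step["tool"]]
        args = [context[name] for name in step["args"]]
        context[step["output"]] = tool(*args)
    return context

context = {"order": {"items": [{"price": 40.0}, {"price": 10.0}]}, "pct": 10}
plan = [  # in an agentic system, an LLM would author these steps
    {"tool": "get_order_total", "args": ["order"], "output": "total"},
    {"tool": "apply_discount", "args": ["total", "pct"], "output": "final"},
]
result = run_plan(plan, context)
print(result["final"])  # 45.0
```

The agentic promise is precisely that the `plan` list stops being hard-coded: an LLM fine-tuned on the tools plane writes and revises it.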


AI datacenters putting zero emissions promises out of reach

Datacenters' use of water and land are other bones of contention, which, in combination with their reliance on tax breaks and the limited number of local jobs they deliver, will see them face growing opposition from local residents and environmental groups. Uptime highlights that many governments have set targets for GHG emissions to become net-zero by a set date, but warns that because the AI boom looks set to test power availability, it will almost certainly put these pledges out of reach. ... Many governments seem convinced of the economic benefits promised by AI at the expense of other concerns, the report notes. The UK is a prime example, this week publishing the AI Opportunities Action Plan and vowing to relax planning rules to prioritize datacenter builds. ... Increasing rack power presents several challenges, the report warns, including the sheer space taken up by power distribution infrastructure such as switchboards, UPS systems, distribution boards, and batteries. Without changes to the power architecture, many datacenters risk becoming an electrical plant built around a relatively small IT room. Solving this will call for changes such as medium-voltage (over 1 kV) distribution to the IT space and novel power distribution topologies. However, this overhaul will take time to unfold, with 2025 potentially a pivotal year for investment to make this possible.


State of passkeys 2025: passkeys move to mainstream

One of the critical factors driving passkeys into mainstream is the full passkey-readiness of devices, operating systems and browsers. Apple (iOS, macOS, Safari), Google (Android, Chrome) and Microsoft (Windows, Edge) have fully integrated passkey support across their platforms: over 95 percent of all iOS & Android devices are passkey-ready, and over 90 percent have passkey functionality enabled. With Windows soon supporting synced passkeys, all major operating systems ensure users can securely and effortlessly access their credentials across devices. ... With full device support, a polished UX, growing user familiarity, and a proven track record among early adopter implementations, there’s no reason for businesses to delay adopting passkeys. The business advantages of passkeys are compelling. Companies that previously relied on SMS-based authentication can save considerably on SMS costs. Beyond that, enterprises adopting passkeys benefit from reduced support overhead (since fewer password resets are needed), lower risk of breaches (thanks to phishing-resistance), and optimized user flows that improve conversion rates. Collectively, these perks make a convincing business case for passkeys.


Balancing usability and security in the fight against identity-based attacks

AI and ML are a double-edged sword in cybersecurity. On one hand, cybercriminals are using these technologies to make their attacks faster and wiser. They can create highly convincing phishing emails, generate deepfake content, and even find ways to bypass traditional security measures. For example, generative AI can craft emails or videos that look almost real, tricking people into falling for scams. On the flip side, AI and ML are also helping defenders. These technologies allow security systems to quickly analyze vast amounts of data, spotting unusual behavior that might indicate compromised credentials. ... Targeted security training can be useful but generally you want to reduce the human dependency as much as possible. This is why controls that can meet a user where they are at is critical. If you can deliver point-in-time guidance, or straight up technically prevent something like a user entering their password into a phishing site, it significantly reduces the dependency on the human to make the right decision unassisted every time. When you consider how hard it can be for even security professionals to spot the more sophisticated phishing sites, it’s essential that we help people out as much as possible with technical controls.


Understanding Leaderless Replication for Distributed Data

Leaderless replication is another fundamental replication approach for distributed systems. It alleviates problems of multi-leader replication while introducing problems of its own. Write conflicts in multi-leader replication are tackled in leaderless replication with quorum-based writes and systematic conflict resolution. Cascading failures, synchronization overhead, and operational complexity can be handled in leaderless replication via its decentralized architecture. Removing leaders can simplify cluster management, failure handling, and recovery mechanisms. Any replica can handle writes/reads. ... Direct writes and coordination-based replication are the most common approaches in leaderless replication. In the first approach, clients write directly to node replicas, while in the second approach, there exist coordinator-mediated writes. It is worth mentioning that, unlike the leader-follower concept, coordinators in leaderless replication do not enforce a particular ordering of writes. ... Failure handling is one of the most challenging aspects of both approaches. While direct writes provide better theoretical availability, they can be problematic during failure scenarios. Coordinator-based systems can provide clearer failure semantics but at the cost of potential coordinator bottlenecks.
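The quorum idea can be sketched in a few lines: with N replicas, a write quorum W, and a read quorum R chosen so that W + R > N, every read quorum overlaps the latest write quorum, so a read can always find the newest version. A simplified, in-memory illustration:

```python
class LeaderlessStore:
    """Toy quorum store: N replicas, write quorum W, read quorum R, W + R > N."""

    def __init__(self, n=3, w=2, r=2):
        assert w + r > n, "quorum overlap requires W + R > N"
        self.replicas = [{} for _ in range(n)]
        self.n, self.w, self.r = n, w, r
        self.version = 0

    def write(self, key, value):
        self.version += 1
        # Simulate only W replicas acknowledging (others may be slow or down).
        for replica in self.replicas[: self.w]:
            replica[key] = (self.version, value)

    def read(self, key):
        # Consult R replicas, here picked from the "other end" of the ring
        # to force the worst-case overlap of exactly one replica.
        answers = [rep.get(key, (0, None)) for rep in self.replicas[-self.r :]]
        return max(answers, key=lambda a: a[0])[1]  # newest version wins

store = LeaderlessStore(n=3, w=2, r=2)
store.write("cart", ["book"])
store.write("cart", ["book", "pen"])
print(store.read("cart"))  # ['book', 'pen']
```

Even though the read quorum includes a replica that never saw the write, version comparison recovers the latest value, which is how leaderless systems resolve stale replicas without a leader.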


Blockchain in Banking: Use Cases and Examples

Bitcoin has entered a space usually reserved for gold and sovereign bonds: national reserves. While the U.S. Federal Reserve maintains that it cannot hold Bitcoin under current regulations, other financial systems are paying close attention to its potential role as a store of value. On the global stage, Bitcoin is being viewed not just as a speculative asset but as a hedge against inflation and currency volatility. Governments are now debating whether digital assets can sit alongside gold bars in their vaults. Behind all this activity lies blockchain - providing transparency, security, and a framework for something as ambitious as a digital reserve currency. ... Financial assets like real estate, investment funds, or fine art are traditionally expensive, hard to divide, and slow to transfer. Blockchain changes this by converting these assets into digital tokens, enabling fractional ownership and simplifying transactions. UBS launched its first tokenized fund on the Ethereum blockchain, allowing investors to trade fund shares as digital assets. This approach reduces administrative costs, accelerates settlements, and improves accessibility for investors. Additionally, one of Central and Eastern Europe’s largest banks has tokenized fine art on Aleph Zero blockchain. This enables fractional ownership of valuable art pieces while maintaining verifiable proof of ownership and authenticity.


Decentralized AI in Edge Computing: Expanding Possibilities

Federated learning enables decentralized training of AI models directly across multiple edge devices. This approach eliminates the need to transfer raw data to a central server, preserving privacy and reducing bandwidth consumption. Models are trained locally, with only aggregated updates shared to improve the global system. ... Localized data processing empowers edge devices to conduct real-time analytics, facilitating faster decision-making and minimizing reliance on central frameworks. This capability is fundamental for applications such as autonomous vehicles and industrial automation, where even milliseconds can be vital. ... Blockchain technology is pivotal in decentralized AI for edge computing by providing a secure, immutable ledger for data sharing and task execution across edge nodes. It ensures transparency and trust in resource allocation, model updates, and data verification processes. ... By processing data directly at the edge, decentralized AI removes the delays in sending data to and from centralized servers. This capability ensures faster response times, enabling near-instantaneous decision-making in critical real-time applications. ... Decentralized AI improves privacy protocols by empowering the processing of sensitive information locally on the device rather than sending it to external servers.


The Myth of Machine Learning Reproducibility and Randomness

The nature of ML systems contributes to the challenge of reproducibility. ML components implement statistical models that provide predictions about some input, such as whether an image is a tank or a car. But it is difficult to provide guarantees about these predictions. As a result, guarantees about the resulting probabilistic distributions are often given only in limits, that is, as distributions across a growing sample. These outputs can also be described by calibration scores and statistical coverage, such as, “We expect the true value of the parameter to be in the range [0.81, 0.85] 95 percent of the time.” ... There are two basic techniques we can use to manage reproducibility. First, we control the seeds for every randomizer used. In practice there may be many. Second, we need a way to tell the system to serialize the training process executed across concurrent and distributed resources. Both approaches require the platform provider to include this sort of support. ... Despite the importance of these exact reproducibility modes, they should not be enabled during production. Engineering and testing should use these configurations for setup, debugging and reference tests, but not during final development or operational testing.
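The first technique, controlling the seed of every randomizer in play, looks like this in a minimal Python sketch (a real pipeline would also seed NumPy, the ML framework, and any worker processes):

```python
import random

def make_rngs(seed):
    """One independently seeded generator per source of randomness."""
    shuffle_rng = random.Random(seed)      # data shuffling
    init_rng = random.Random(seed + 1)     # weight initialization
    dropout_rng = random.Random(seed + 2)  # dropout masks
    return shuffle_rng, init_rng, dropout_rng

def training_trace(seed):
    """Toy stand-in for a training run; returns everything randomness touched."""
    shuffle_rng, init_rng, dropout_rng = make_rngs(seed)
    data = list(range(10))
    shuffle_rng.shuffle(data)
    weights = [init_rng.gauss(0, 1) for _ in range(3)]
    mask = [dropout_rng.random() > 0.5 for _ in range(3)]
    return data, weights, mask

# Two runs with the same seed produce bit-identical traces.
print(training_trace(42) == training_trace(42))  # True
```

Using separate generators per purpose matters: with a single shared generator, merely reordering operations (as concurrent execution does) would change every downstream draw, which is exactly why the second technique, serializing the training process, is also needed.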


The High-Stakes Disconnect For ICS/OT Security

ICS technologies, crucial to modern infrastructure, are increasingly targeted in sophisticated cyber-attacks. These attacks, often aimed at causing irreversible physical damage to critical engineering assets, highlight the risks of interconnected and digitized systems. Recent incidents like TRISIS, CRASHOVERRIDE, Pipedream, and Fuxnet demonstrate the evolution of cyber threats from mere nuisances to potentially catastrophic events, orchestrated by state-sponsored groups and cybercriminals. These actors target not just financial gains but also disruptive outcomes and acts of warfare, blending cyber and physical attacks. Additionally, human-operated ransomware and targeted ICS/OT ransomware are growing concerns, having been on the rise in recent times. ... Traditional IT security measures, when applied to ICS/OT environments, can provide a false sense of security and disrupt engineering operations and safety. Thus, it is important to consider and prioritize the SANS Five ICS Cybersecurity Critical Controls. This freely available whitepaper sets forth the five most relevant critical controls for an ICS/OT cybersecurity strategy that can flex to an organization's risk model and provides guidance for implementing them.


Execs are prioritizing skills over degrees — and hiring freelancers to fill gaps

Companies are adopting more advanced approaches to assessing potential and current employee skills, blending AI tools with hands-on evaluations, according to Monahan. AI-powered platforms are being used to match candidates with roles based on their skills, certifications, and experience. “Our platform has done this for years, and our new UMA (Upwork’s Mindful AI) enhances this process,” she said. Gartner, however, warned that “rapid skills evolutions can threaten quality of hire, as recruiters struggle to ensure their assessment processes are keeping pace with changing skills. Meanwhile, skills shortages place more weight on new hires being the right hires, as finding replacement talent becomes increasingly challenging. Robust appraisal of candidate skills is therefore imperative, but too many assessments can lead to candidate fatigue.” ... The shift toward skills-based hiring is further driven by a readiness gap in today’s workforce. Upwork’s research found that only 25% of employees feel prepared to work effectively alongside AI, and even fewer (19%) can proactively leverage AI to solve problems. “As companies navigate these challenges, they’re focusing on hiring based on practical, demonstrated capabilities, ensuring their workforce is agile and equipped to meet the demands of a rapidly evolving business landscape,” Monahan said.



Quote for the day:

“If you set your goals ridiculously high and it’s a failure, you will fail above everyone else’s success.” -- James Cameron

Daily Tech Digest - January 16, 2025

How DPUs Make Collaboration Between AppDev and NetOps Essential

While GPUs have gotten much of the limelight due to AI, DPUs in the cloud are having an equally profound impact on how applications are delivered and network functions are designed. The rise of DPU-as-a-Service is breaking down traditional silos between AppDev and NetOps teams, making collaboration essential to fully unlock DPU capabilities. DPUs offload network, security, and data processing tasks, transforming how applications interact with network infrastructure. AppDev teams must now design applications with these offloading capabilities in mind, identifying which tasks can benefit most from DPUs—such as real-time data encryption or intensive packet processing. ... AppDev teams must explicitly design applications to leverage DPU-accelerated encryption, while NetOps teams need to configure DPUs to handle these workloads efficiently. This intersection of concerns creates a natural collaboration point. The benefits of this collaboration extend beyond security. DPUs excel at packet processing, data compression, and storage operations. When AppDev and NetOps teams work together, they can identify opportunities to offload compute-intensive tasks to DPUs, dramatically improving application performance. 


The CFO may be the CISO’s most important business ally

“Cybersecurity is an existential threat to every company. Gone are the days where CFOs could only be fired if they ran out of money, cooked the books, or had a major controls outage,” he said. “Lack of adequate resourcing of cybersecurity is an emerging threat to their very existence.” This sentiment reflects the reality that for most organizations cyber threat is the No. 1 business risk today, and this has significant implications for the strategic survival of the enterprise. It’s time for CISOs and CFOs to address the natural barriers to their relationship and develop a strategic partnership for the good of the company. ... CISOs should be aware of a few key strategies for improving collaboration with their CFO counterparts. The first is reverse mentoring. Because CFOs and CISOs come from differing perspectives and lead domains rife with terminology and details that can be quite foreign to the other, reverse mentoring can be important for building a bridge between the two. In such a relationship, the CISO can offer insights into cybersecurity, while simultaneously learning to communicate in the CFO’s financial language. This mutual learning creates a more aligned approach to organizational risk. Second, CISOs must also develop their commercial perspective.


Establishing a Software-Based, High-Availability Failover Strategy for Disaster Mitigation and Recovery

No one should be surprised that cloud services occasionally go offline. If you think of the cloud as “someone else’s computer,” then you recognize there are servers and software behind it all. Someone else is doing their best to keep the lights on in the face of events like human error, natural disasters, and DDoS and other types of cyberattacks. Someone else is executing their disaster response and recovery plan. While the cloud may well be someone else’s computer, when there is a cloud outage that affects your operations, it is your problem. You are at the mercy of someone else to restore services so you can get back online. It doesn’t have to be that way. Cloud-dependent organizations can adopt strategies that allow them to minimize the risk someone else’s outage will knock them offline. One such strategy is to take advantage of hybrid or multi-cloud architecture to achieve operational resiliency and high availability through service redundancy via SANless clustering. Normally a storage area network (SAN) uses local storage to configure clustered nodes on-premises, in the cloud, and to a disaster recovery site. It’s a proven approach, but because it is hardware dependent, it is costly in terms of dollars and computing resources, and comes with additional management demands.


Trusted Apps Sneak a Bug Into the UEFI Boot Process

UEFI is a kind of sacred space — a bridge between firmware and operating system, allowing a machine to boot up in the first place. Any malware that invades this space will earn a dogged persistence through reboots, by reserving its own spot in the startup process. Security programs have a harder time detecting malware at such a low level of the system. Even more importantly, by loading first, UEFI malware will simply have a head start over those security checks that it aims to avoid. Malware authors take advantage of this order of operations by designing UEFI bootkits that can hook into security protocols, and undermine critical security mechanisms like UEFI Secure Boot or HVCI, Windows' technology for blocking unsigned code in the kernel. To ensure that none of this can happen, the UEFI Boot Manager verifies every boot application binary against two lists: "db," which includes all signed and trusted programs, and "dbx," including all forbidden programs. But when a vulnerable binary is signed by Microsoft, the matter is moot. Microsoft maintains a list of requirements for signing UEFI binaries, but the process is a bit obscure, Smolár says. "I don't know if it involves only running through this list of requirements, or if there are some other activities involved, like manual binary reviews where they look for not necessarily malicious, but insecure behavior," he says.
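The db/dbx check reduces to a simple allow/deny rule: a binary runs only if it matches the trusted list (db) and does not appear in the revocation list (dbx). A simplified sketch with placeholder hashes:

```python
def may_execute(binary_hash, db, dbx):
    """Toy model of the UEFI Boot Manager's Secure Boot decision."""
    if binary_hash in dbx:    # explicitly revoked: always refused
        return False
    return binary_hash in db  # otherwise, only signed/trusted binaries run

db = {"bootmgr-ok", "vendor-loader"}  # signed and trusted
dbx = {"vendor-loader"}               # later found vulnerable and revoked

print(may_execute("bootmgr-ok", db, dbx))     # True
print(may_execute("vendor-loader", db, dbx))  # False: signed but revoked
print(may_execute("rootkit", db, dbx))        # False: never trusted
```

This is also why a vulnerable binary that Microsoft has signed is so dangerous: until its hash lands in dbx, it passes the check like any legitimate boot application.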


How CISOs Can Build a Disaster Recovery Skillset

In a world of third-party risk, human error, and motivated threat actors, even the best prepared CISOs cannot always shield their enterprises from all cybersecurity incidents. When disaster strikes, how can they put their skills to work? “It is an opportunity for the CISO to step in and lead,” says Erwin. “That's the most critical thing a CISO is going to do in those incidents, and if the CISO isn't capable of doing that or doesn't show up and shape the response, well, that's an indication of a problem.” CISOs, naturally, want to guide their enterprises through a cybersecurity incident. But disaster recovery skills also apply to their own careers. “I don't see a world where CISOs don't get some blame when an incident happens,” says Young. There is plenty of concern over personal liability in this role. CISOs must consider the possibility of being replaced in the wake of an incident and potentially being held personally responsible. “Do you have parachute packages like CEOs do in their corporate agreements for employability when they're hired?” Young asks. “I also see this big push of not only … CISOs on the D&O insurance, but they're also starting to acquire private liability insurance for themselves directly.”


Site Reliability Engineering Teams Face Rising Challenges

While AI adoption continues to grow, it hasn't reduced operational burdens as expected. Performance issues are now considered as critical as complete outages. Organizations are also grappling with balancing release velocity against reliability requirements. ... Daoudi suspects that there are a series of contributing factors that have led to the unexpected rise in toil levels. The first is AI systems maintenance: AI systems themselves require significant maintenance, including updating models and managing GPU clusters. AI systems also often need manual supervision due to subtle and hard-to-predict errors, which can increase the operational load. Additionally, the free time created by expediting valuable activities through AI may end up being filled with toilsome tasks, he said. "This trend could impact the future of SRE practices by necessitating a more nuanced approach to AI integration, focusing on balancing automation with the need for human oversight and continuous improvement," Daoudi said. Beyond AI, Daoudi also suspects that organizations are incorrectly evaluating toolchain investments. In his view, despite all the investments in inward-focused application performance management (APM) tools, there are still too many incidents, and the report shows a sentiment for insufficient observability instrumentation.


The Hidden Cost of Open Source Waste

Open source inefficiencies impact organizations in ways that go well beyond technical concerns. First, they drain productivity. Developers spend as much as 35% of their time untangling dependency issues or managing vulnerabilities — time that could be far better spent building new products, paying down technical debt, or introducing automation to drive cost efficiencies. ... Outdated dependencies compound the challenge. According to the report, 80% of application dependencies remain un-upgraded for over a year. While not all of these components introduce critical vulnerabilities, failing to address them increases the risk of undetected security gaps and adds unnecessary complexity to the software supply chain. This lack of timely updates leaves development teams with mounting technical debt and a higher likelihood of encountering issues that could have been avoided. The rapid pace of software evolution adds another layer of difficulty. Dependencies can become outdated in weeks, creating a moving target that’s hard to manage without automation and actionable insights. Teams often play catch-up, deepening inefficiencies and increasing the time spent on reactive maintenance. Automation helps bridge this gap by scanning for risks and prioritizing high-impact fixes, ensuring teams focus on the areas that matter most.
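The "un-upgraded for over a year" finding lends itself to a simple automated check. The sketch below assumes you already know each dependency's last-upgrade date (in practice you would pull this from a registry API or a tool like Renovate); the package names and dates are illustrative.

```python
from datetime import date, timedelta

def stale_dependencies(last_upgraded: dict, today: date, max_age_days: int = 365):
    """Flag dependencies that have not been upgraded within max_age_days."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, upgraded in last_upgraded.items() if upgraded < cutoff)

# Illustrative data: when each pinned dependency was last upgraded.
last_upgraded = {
    "web-framework": date(2023, 2, 1),   # over a year old
    "json-parser": date(2024, 11, 5),    # recent
    "crypto-lib": date(2022, 6, 30),     # over a year old
}
print(stale_dependencies(last_upgraded, today=date(2025, 1, 18)))
# → ['crypto-lib', 'web-framework']
```

A check like this, run in CI, is the kind of automation the excerpt describes: it turns the moving target of dependency age into a concrete, prioritized list.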


The Virtualization Era: Opportunities, Challenges, and the Role of Hypervisors

Choosing the most appropriate hypervisor requires thoughtful consideration of an organization’s immediate needs and long-term goals. Scalability is a crucial factor, as the selected solution must address current workloads and seamlessly adapt to future demands. A hypervisor that integrates smoothly with an organization’s existing IT infrastructure reduces the risks of operational disruptions and ensures a cost-effective transition. Equally important is the financial aspect, where businesses must look beyond the initial licensing fees to account for potential hidden costs, such as staff training, ongoing support, and any necessary adjustments to workflows. The quality of support the vendor provides, coupled with the strength of the user community, can significantly influence the overall experience, offering critical assistance during implementation and beyond. For many businesses, partnering with Managed Service Providers (MSPs) brings an added layer of expertise, ensuring that the chosen solution delivers maximum value while minimizing risk. The ongoing evolution and transformation of the virtualization market presents both challenges and opportunities. As the foundation for IT efficiency and flexibility, hypervisors remain central to these changes.

 

DORA’s Deadline Looms: Navigating the EU’s Mandate for Threat Led Penetration Testing

It’s hard to defend yourself if you have no idea what you’re up against, and history and countless news stories are evidence that trying to defend against every manner of digital threat is a fool’s errand. As such, the first step to approaching DORA compliance is profiling not only the threat actors that target the financial services sector generally, but specifically which actors are likely to attack you, and by what Tactics, Techniques and Procedures (TTPs). Before you can determine how an actor may view and approach you, however, you need to know who you are. So the first profile that must be built is of your own business: not just financial services, but which sector and aspect, which region, and finally the specific risk profile based on the critical assets in your organization's, and even your partners', infrastructures. The second profile begins with the current population of known actors that target the financial services industry. It then narrows to the actors known to align with your specific targeting profile. From there, leveraging industry-standard models such as the MITRE ATT&CK framework, a graph is created of each actor or group's understood goals and TTPs, including their traditional and preferred methods of access and exploitation, as well as their capabilities for evasion, persistence, and command and control.
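The two-profile process described above amounts to intersecting your business profile with known actor profiles, then expanding each match into its TTPs. A minimal sketch, with hypothetical actor names (the technique IDs follow MITRE ATT&CK's numbering):

```python
# Step 1: profile your own business (sector, region, etc.).
our_profile = {"sector": "retail-banking", "region": "EU"}

# Step 2: known actors targeting financial services (illustrative entries).
actors = [
    {"name": "GroupA", "sectors": {"retail-banking"}, "regions": {"EU", "US"},
     "ttps": ["T1566 phishing", "T1078 valid accounts"]},
    {"name": "GroupB", "sectors": {"insurance"}, "regions": {"EU"},
     "ttps": ["T1190 exploit public-facing application"]},
]

def relevant_ttps(profile, actors):
    """Narrow the actor population to those matching our profile, then
    collect their TTPs as a simple actor -> techniques mapping (the
    'graph' the excerpt describes, in its flattest form)."""
    return {a["name"]: a["ttps"]
            for a in actors
            if profile["sector"] in a["sectors"] and profile["region"] in a["regions"]}

print(relevant_ttps(our_profile, actors))
# → {'GroupA': ['T1566 phishing', 'T1078 valid accounts']}
```

The output is the input to threat-led penetration testing: the TTPs your red team should emulate first.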


With AGI looming, CIOs stay the course on AI partnerships

“The immediate path for CIOs is to leverage gen AI for augmentation rather than replacement — creating tools that help human teams make smarter, faster decisions,” Nardecchia says. “There are very promising results with causal AI and AI agents that give an autonomous-like capability and most solutions still have a human in the loop.” Matthew Gunkel, CIO of IT Solutions at the University of California at Riverside, agrees that IT organizations should keep moving forward regardless of the growing delta between AI technology milestones and actual AI implementations. ... “The rapid advancements in AI technology, including projections for AGI and ACI, present a paradox: While the technology races ahead, enterprise adoption remains in its infancy. This divergence creates both challenges and opportunities for CIOs, employees, and AI vendors,” Priest says. “Rather than speculating on when AGI/ACI will materialize, CIOs would be best served to focus on what preparation is required to be ready for it and to maximize the value from it.” Sid Nag, vice president at Gartner, agrees that CIOs should train their attention on laying the foundation for AI and addressing important matters such as privacy, ethics, legal issues, and copyright issues, rather than focus on AGI advances.



Quote for the day:

"When you practice leadership, the evidence of the quality of your leadership is known from the type of leaders that emerge out of your leadership" -- Sujit Lalwani

Daily Tech Digest - January 15, 2025

Passkeys: they're not perfect but they're getting better

Users are largely unsure about the implications for their passkeys if they lose or break their device, as it seems their device holds the entire capability to authenticate. To trust passkeys as a replacement for the password, users need to be prepared and know what to do in the event of losing one – or all – of their devices. ... Passkeys are ‘long life’ because users can’t forget them or create one that is weak, so if they’re done well there should be no need to reset or update them. As a result, there’s an increased likelihood that at some point a user will want to move their passkeys to the Credential Manager of a different vendor or platform. This is currently challenging to do, but FIDO and vendors are actively working to address this issue and we wait to see support for this take hold across the market. ... For passkey-protected accounts, potential attackers are now more likely to focus on finding weaknesses in account recovery and reset requests – whether by email, phone or chat – and pivot to phishing for recovery keys. These processes need to be sufficiently hardened by providers to prevent trivial abuse by these attackers and to maintain the security benefits of using passkeys. Users also need to be educated on how to spot and report abuse of these processes before their accounts are compromised.


Securing Payment Software: How the PCI SSF Modular System Enhances Flexibility and Security

The framework was introduced to replace the aging Payment Application Data Security Standard (PA-DSS), which focused primarily on payment application security. As software development technologies and methodologies rapidly evolved, the need for a dynamic and adaptable security standard became increasingly apparent, prompting the creation of the PCI SSF. The PCI SSF encompasses a broader range of security requirements tailored for modern software environments. ... The modular system of the PCI SSF is designed to offer both flexibility and scalability, enabling organizations to address their specific security needs based on their unique software environments. The modular approach allows organizations to select and implement only the components relevant to their software, which simplifies the process of achieving and maintaining compliance. ... The PCI SSF’s modular system marks a transformative step in payment software security, balancing adaptability with comprehensive protection against evolving cyber threats. Its flexible, scalable approach allows organizations to tailor their security efforts to their unique needs, ensuring robust protection for payment data.


The cloud cost wake-up call I predicted

Cloud computing starts as a flexible and budget-friendly option, especially with its enticing pay-per-use model. However, unchecked growth can turn this dream into a financial nightmare due to the complexities the cloud introduces. According to the Flexera State of the Cloud Report, 87% of organizations have adopted multicloud strategies, complicating cost management even more by scattering workloads and expenses across various platforms. The rise of cloud-native applications and microservices has further complicated cost management. These systems abstract physical resources, simplifying development but making costs harder to predict and control. Recent studies have revealed that 69% of CPU resources in container environments go unused, a direct contradiction of optimal cost management practices. Although open-source tools like Prometheus are excellent for tracking usage and spending, they often fall short as organizations scale. ... A critical component of effective cloud cost management is demystifying cloud pricing models. Providers often lay out their pricing structures in great detail, but translating them into actual costs can be difficult. A lack of understanding can lead to spiraling costs.
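The 69%-unused figure translates directly into spend: if a container fleet is billed by provisioned CPU, the idle fraction is money paid for capacity that is never consumed. A back-of-the-envelope sketch, with hypothetical numbers:

```python
def wasted_spend(monthly_bill: float, cpu_utilization: float) -> float:
    """Estimate spend attributable to idle CPU, assuming cost scales with
    provisioned (not consumed) CPU. A deliberate simplification: real
    bills mix reserved capacity, storage, egress, and other line items."""
    return monthly_bill * (1.0 - cpu_utilization)

# Hypothetical: a $120k/month container bill at 31% average utilization
# (i.e., 69% of CPU resources unused, per the study cited above).
print(round(wasted_spend(120_000, 0.31), 2))  # → 82800.0
```

Even as a rough estimate, a calculation like this is useful for making the cost of unchecked growth visible to stakeholders before rightsizing work begins.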


Using cognitive diversity for stronger, smarter cyber defense

Cognitive biases significantly influence decision-making during cybersecurity incidents by framing how individuals interpret information, assess risks, and respond to threats. ... Integrating cognitive science into cybersecurity tools involves understanding how human cognitive processes – such as perception, memory, decision-making, and problem-solving – affect security tasks. Designing user-friendly tools requires aligning cognitive models with diverse user behaviors while managing cognitive load, ensuring usability without compromising security, and adapting to the fast-changing cybersecurity landscape. Interfaces must cater to varying skill levels, promote awareness, and support effective decision-making, all while addressing ethical considerations like privacy and bias. Interdisciplinary collaboration between psychology, computer science, and cybersecurity experts is essential but challenging due to differences in expertise and communication styles. ... Cognitive diversity can frequently divert resources or distract from immediate or emerging threats. Focus on the things that are likely to happen. Implement defensive measures that require few resources while more complex measures are prioritized.


Next-gen Ethernet standards set to move forward in 2025

Beyond the big-ticket items of higher bandwidth and AI, a key activity in any year for Ethernet is interoperability testing for all manner of existing and emerging specifications. 200 Gigabits per second per lane is an important milestone on the path to an even higher bandwidth Ethernet specification that will exceed 1 Terabit per second. ... With 800GbE now firmly established, adoption and expansion into ever larger bandwidth will be a key theme in 2025. There will be no shortage of vendors offering 800 GbE equipment in 2025, but when it comes to Ethernet standards, focus will be on 1.6 Terabits/second Ethernet. “As 800GbE has come to market, the next speed for Ethernet is being talked about already,” Martin Hull, vice president and general manager for cloud and AI platforms at Arista Networks, told Network World. “1.6Tb Ethernet is being discussed in terms of the optics, the form factors and use cases, and we expect industry leaders to be trialing 1.6T systems towards the end of 2025.” ... “High-speed computing requires high bandwidth and reliable interconnect solutions,” Rodgers said. “However, high-speed also means high power and higher heat, placing more demands on the electrical grid and resources and creating a demand for new options.” That’s where LPOs will fit in.


Stop wasting money on ineffective threat intelligence: 5 mistakes to avoid

“CTI really needs to fall underneath your risk management and if you don’t have a risk management program you need to identify that (as a priority),” says Ken Dunham, cyber threat director for the Qualys Threat Research Unit. “It really should come down to: what are the core things you’re trying to protect? Where are your crown jewels or your high value assets?” Without risk management to set those priorities, organizations will not be able to appropriately set requirements for intelligence collection that will have them gather the kind of relevant sources that pertain to their most valuable assets. ... Bad intelligence can often be worse than none, leading to a lot of time wasted by analysts to validate and contextualize poor quality feeds. Even worse, if this work isn’t done appropriately, poor quality data could potentially even lead to misguided choices at the operational or strategic level. Security leaders should be tasking their intelligence team with regularly reviewing the usefulness of their sources based on a few key attributes. ... Even if CTI is doing an excellent job collecting the right kind of quality intelligence that its stakeholders are asking for, all that work can go for naught if it isn’t appropriately routed to the people that need it — in the format that makes sense for them.
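The "few key attributes" for reviewing intelligence sources can be made concrete with a weighted quality score. The attribute names and weights below are illustrative, not a standard; the point is that a regular, repeatable review beats gut feel.

```python
def feed_score(ratings: dict, weights=None) -> float:
    """Weighted quality score for a threat-intel feed.

    ratings: attribute -> 0..1 (e.g., how timely, accurate, and relevant
    to your crown-jewel assets the feed's indicators have proven to be).
    """
    weights = weights or {"timeliness": 0.3, "accuracy": 0.4, "relevance": 0.3}
    return sum(weights[k] * ratings[k] for k in weights)

# Hypothetical review of two feeds by the intelligence team.
feeds = {
    "vendor-feed": {"timeliness": 0.9, "accuracy": 0.8, "relevance": 0.6},
    "free-feed":   {"timeliness": 0.5, "accuracy": 0.4, "relevance": 0.3},
}
ranked = sorted(feeds, key=lambda f: feed_score(feeds[f]), reverse=True)
print(ranked)  # → ['vendor-feed', 'free-feed']
```

Feeds that consistently score low are the ones whose validation overhead is likely exceeding their value, which is exactly the "bad intelligence is worse than none" trap the excerpt warns about.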


Exposure Management: A Strategic Approach to Cyber Security Resource Constraint

XM is a proactive and integrated approach that provides a comprehensive view of potential attack surfaces and prioritises security actions based on an organisation’s specific context. It’s a process that combines cloud security posture, identity management, internal hosts, internet-facing hosts and threat intelligence into a unified framework, enabling security teams to anticipate potential attack vectors and fortify their defences effectively. Unlike traditional security measures, XM takes an “outside-in” approach, assessing how attackers might exploit vulnerabilities across interconnected systems. This shift in mindset is crucial for identifying and prioritising the most significant threats. By focusing on the most critical vulnerabilities and potential attack paths, XM allows security teams to allocate resources more efficiently and enhance their overall security posture. ... By providing a unified view of the entire attack path, XM improves an organisation’s ability to manage security risks. This unified view allows security teams to understand how vulnerabilities can be exploited and prioritise those that pose the greatest risk. Security teams are then able to guarantee efficient resource allocation and focus on threats with the most significant impact on business operations.
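The "unified view of the entire attack path" can be modeled as a reachability graph from internet-facing footholds to crown-jewel assets; prioritization then ranks vulnerabilities by whether their host sits on a path to something critical. A toy sketch with hypothetical hosts and CVE labels:

```python
from collections import deque

# Hypothetical environment: an edge means "an attacker on X can reach Y".
reachable = {
    "internet": ["web-server"],
    "web-server": ["app-server"],
    "app-server": ["customer-db"],
    "build-box": ["artifact-store"],
}
critical = {"customer-db"}

def on_path_to_critical(start: str) -> bool:
    """Breadth-first search from a foothold; True if any critical asset
    is reachable, i.e., the foothold lies on a viable attack path."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in critical:
            return True
        if node not in seen:
            seen.add(node)
            queue.extend(reachable.get(node, []))
    return False

vulns = [("web-server", "CVE-A"), ("build-box", "CVE-B")]
priority = [cve for host, cve in vulns if on_path_to_critical(host)]
print(priority)  # → ['CVE-A']: it sits on a path to the customer database
```

This is the "outside-in" shift in miniature: CVE-B may have the higher CVSS score, but CVE-A gets fixed first because of where it sits in the graph.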


How GenAI is Exposing the Limits of Data Centre Infrastructure

Energy-intensive Graphics Processing Units (GPUs) that power AI platforms require five to 10 times more energy than Central Processing Units (CPUs) because of their larger number of transistors. This is already impacting data centres. There are also new, cost-effective design methodologies incorporating features such as 3D silicon stacking, which allows GPU manufacturers to pack more components into a smaller footprint. This again increases the power density, meaning data centres need more energy and generate more heat. Another trend running in parallel is a steady fall in TCase (case temperature) in the latest chips. TCase is the maximum safe temperature for the surface of chips such as GPUs. It is a limit set by the manufacturer to ensure the chip will run smoothly and not overheat, or require throttling, which impacts performance. On newer chips, TCase is coming down from 90 to 100 degrees Celsius to 70 or 80 degrees, or even lower. This is further driving the demand for new ways to cool GPUs. As a result of these factors, air cooling is no longer doing the job when it comes to AI. It is not just the power of the components, but the density of those components in the data centre. Unless servers become three times bigger than they were before, efficient heat removal is needed.
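The power-density point can be made concrete with a rough rack-level calculation. All wattages and device counts below are illustrative assumptions, not vendor figures; the per-device ratio (1000 W vs. 200 W) sits at the low end of the "five to 10 times" range cited above.

```python
def rack_power_kw(devices: int, watts_each: float) -> float:
    """Total electrical draw of one rack, in kilowatts. Essentially all
    of this becomes heat the cooling system must remove."""
    return devices * watts_each / 1000.0

# Illustrative comparison: a CPU server rack vs. a GPU server rack.
cpu_rack = rack_power_kw(devices=40, watts_each=200)    # 8.0 kW
gpu_rack = rack_power_kw(devices=32, watts_each=1000)   # 32.0 kW
print(cpu_rack, gpu_rack, round(gpu_rack / cpu_rack, 1))
```

Four times the heat in the same footprint, with a lower allowable chip surface temperature, is why the excerpt concludes that air cooling alone no longer does the job.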


The Configuration Crisis and Developer Dependency on AI

As our IT infrastructure grows ever more modular, layered and interconnected, we deal with myriad configurable parts — each one governed by a dense thicket of settings. All of our computers — whether in our pockets, on our desks or in the cloud — have a bewildering labyrinth of components with settings to discover and fiddle with, both individually and in combination. ... A couple of strategies I’ve mentioned before bear repeating. One is the use of screenshots, which are now a powerful index in the corpus of synthesized knowledge. Like all forms of web software, the cloud platforms’ GUI consoles present a haphazard mix of UX idioms. A maneuver that is conceptually the same across platforms will often be expressed using very different affordances. AIs are pattern recognizers that can help us see and work with the common underlying patterns.


From project to product: Architecting the future of enterprise technology

Modern enterprise architecture requires thinking like an urban planner rather than a building inspector. This means creating environments that enable innovation while ensuring system integrity and sustainability. ... Just as urban planners need to develop a shared vocabulary with city officials, developers and citizens, enterprise architects must establish a common language that bridges technical and business domains. Complex ideas that remain purely verbal often get lost or misunderstood. Documentation and diagrams transform abstract discussions into something tangible. By articulating fitness functions — automated tests tied to specific quality attributes like reliability, security or performance — teams can visualize and measure system qualities that align with business goals. ... Technology governance alone will often just inform you of capability gaps, tech debt and duplication — this could be too late! Enterprise architects must shift their focus to business enablement. This is much more proactive in understanding the business objectives and planning and mapping the path for delivery. ... Just as cities must evolve while preserving their essential character, modern enterprise architecture requires built-in mechanisms for sustainable change. 
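A fitness function, as described above, is just an automated test pinned to a quality attribute. A minimal sketch for a latency attribute follows; the 50 ms budget and the stand-in operation are hypothetical, and a real fitness function would exercise an actual service call in CI.

```python
import time

def measure_p95_latency_ms(operation, runs: int = 50) -> float:
    """Crude p95 latency of an operation, in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[int(0.95 * (len(samples) - 1))]

def latency_fitness(operation, budget_ms: float = 50.0) -> bool:
    """Fitness function: the system 'passes' while p95 latency stays
    within the budget tied to the performance quality attribute."""
    return measure_p95_latency_ms(operation) <= budget_ms

# Hypothetical fast operation stands in for a real service call.
print(latency_fitness(lambda: sum(range(1000))))
```

Wired into a pipeline, a check like this turns "the system should be fast" from a verbal aspiration into the tangible, measurable artifact the excerpt calls for.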



Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein

Daily Tech Digest - January 14, 2025

Why Your Business May Want to Shift to an Industry Cloud Platform

Industry cloud services typically embed the data model, processes, templates, accelerators, security constructs, and governance controls required by the adopter's industry, says Shriram Natarajan, a director at technology research and advisory firm ISG, in an online interview. "This [approach] allows faster development of new functionality, better security and governance, and an enhanced and user/stakeholder experience." ... Enterprises spanning many industries can benefit significantly by moving to an industry cloud platform, Campbell says. "Businesses that are faced with many regulations and operational requirements can especially benefit from the specialized services industry cloud platforms," he notes, adding that many industry cloud platforms are preconfigured to meet specific needs, which can help accelerate the time to value realized. Many enterprises have a blinkered view on verticalized solutions, Natarajan says. "They tend to see the platforms they already have in-house and look for solutions that these platforms provide." He believes that enterprise IT and business teams can both benefit from looking at the landscape of verticalized industry cloud platforms.


FRAML Reality Check: Is Full Integration Really Practical?

While integration between AML and fraud teams is a desirable goal, experts say it should not be viewed as the best solution. Paul Dunlop, insider risk consultant at a financial services firm, stressed the importance of collaboration over integration. "I am against the oversimplification of fraud and AML integration. Banking risks are multifaceted, involving not just fraud and AML but also cybersecurity, privacy and other domains," Dunlop said. "Integration decision should be assessed based on the bank's maturity level, regulatory environment and unique operational needs." "Cost should not be the sole factor behind this decision. One must assess operational and risk management trade-offs," he said. Meng Liu, senior analyst at Forrester, said that despite AML and fraud being two distinct functions at present, the trend toward more consolidated and integrated financial crime management is real. ... Despite the differences in fraud and AML teams, some use cases, such as scams, human trafficking and child exploitation, cry out for better collaboration, Mitchell said. "These require shared data and aligned strategies." But high-volume fraud detection such as check and card fraud is less suited for joint efforts due to operational complexity.


Ransomware abuses Amazon AWS feature to encrypt S3 buckets

In the attacks by Codefinger, the threat actors used compromised AWS credentials to locate victims' keys with 's3:GetObject' and 's3:PutObject' privileges, which allow those accounts to encrypt objects in S3 buckets through SSE-C. The attacker then generates an encryption key locally to encrypt the target's data. Since AWS doesn't store these encryption keys, data recovery without the attacker's key is impossible, even if the victim reports unauthorized activity to Amazon. "By utilizing AWS native services, they achieve encryption in a way that is both secure and unrecoverable without their cooperation," explains Halcyon. Next, the attacker sets a seven-day file deletion policy using the S3 Object Lifecycle Management API and drops ransom notes in all affected directories that instruct the victim to pay a ransom to a given Bitcoin address in exchange for the custom AES-256 key. ... Halcyon also suggests that AWS customers set restrictive policies that prevent the use of SSE-C on their S3 buckets. Concerning AWS keys, unused keys should be disabled, active ones should be rotated frequently, and account permissions should be kept at the minimum level required.
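The suggested restriction on SSE-C can be expressed as a bucket policy that denies any PutObject request carrying a customer-provided encryption algorithm (the `s3:x-amz-server-side-encryption-customer-algorithm` condition key). The sketch below only builds the policy document; the bucket name is hypothetical, and applying the policy would use boto3's `put_bucket_policy` as noted in the trailing comment.

```python
import json

def deny_sse_c_policy(bucket: str) -> str:
    """Bucket policy denying uploads that use SSE-C (customer-provided
    keys), per the mitigation described above. The Null condition set to
    "false" matches requests where the SSE-C header IS present."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenySSECUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption-customer-algorithm": "false"}
            },
        }],
    }
    return json.dumps(policy)

doc = deny_sse_c_policy("example-bucket")  # hypothetical bucket name
print("DenySSECUploads" in doc)
# To apply: boto3.client("s3").put_bucket_policy(Bucket="example-bucket", Policy=doc)
```

With this policy in place, a Codefinger-style attacker holding valid credentials can still read and write objects, but can no longer encrypt them under a key AWS never sees.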


How AI and ML are transforming digital banking security

By continuously learning from new data, ML improves over time, adapting to the organization’s needs and the ever-evolving fraud tactics. This supports reducing false positives, ensuring legitimate transactions proceed smoothly while maintaining security. Predictive analytics also help identify potential threats before they materialize, and fraud scoring prioritizes high-risk activities for action. AI/ML-powered systems are scalable and effective against sophisticated threats, such as synthetic identity fraud and account takeovers, and can monitor multiple banking channels simultaneously. They automate detection, lowering operational costs and providing seamless customer experiences, thereby enhancing trust. However, nothing is a silver bullet, and concerns such as algorithm bias, data privacy, and the need for explainable models persist. Still, despite these potential hurdles, AI and ML are reshaping digital banking security, equipping financial institutions with proactive tools to counter fraud while safeguarding customer trust and regulatory compliance. ... Advanced technologies like AI and ML are helping institutions monitor transactions in real time, detecting anomalies and preventing fraud without directly involving users. Meanwhile, encryption and tokenization protect sensitive data, ensuring transactions remain secure in the background.
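The fraud-scoring step can be illustrated with a minimal statistical baseline. Real systems combine many behavioral features with trained ML models; this z-score sketch over transaction amounts is only a stand-in, and the figures are invented.

```python
from statistics import mean, stdev

def fraud_scores(history: list, candidates: list) -> list:
    """Score candidate transactions by how far their amount deviates
    from the account's historical baseline (|z-score|). Higher scores
    mean higher priority for review."""
    mu, sigma = mean(history), stdev(history)
    return [abs(amount - mu) / sigma for amount in candidates]

history = [42.0, 38.5, 55.0, 47.0, 40.5, 51.0]   # typical card spend
scores = fraud_scores(history, [45.0, 900.0])
flagged = [s > 3 for s in scores]                 # simple high-risk threshold
print(flagged)  # → [False, True]: only the $900 outlier is flagged
```

The excerpt's point about false positives shows up even here: the threshold trades missed fraud against friction for legitimate customers, and an ML model essentially learns that trade-off across many features instead of one.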


The Evolution of Business Systems in the Digital Era

Systems of Record (SORs) serve as the foundation of organizational infrastructure, storing essential data such as customer information, financial transactions, and operational processes. These systems are designed to maintain structured and reliable records, ensuring data integrity, compliance, and security. They play a critical role in regulatory reporting, audits, and operational consistency. ... Systems of Engagement (SOEs) are the digital front doors of modern businesses, facilitating seamless and interactive communication with customers and employees. They go beyond simple data storage and retrieval, focusing on creating dynamic and personalized experiences across various channels. SOEs prioritize customer-centric approaches, ensuring businesses can deliver dynamic and interactive communication. ... Systems of Intelligence (SOIs) represent the pinnacle of data-driven decision making. Built upon the foundation of Systems of Record (SORs) and Systems of Engagement (SOEs), SOIs leverage the power of artificial intelligence (AI) and machine learning (ML) to transform raw data into actionable insights. Unlike their predecessors, SOIs go beyond simply identifying patterns and trends. They possess the ability to predict future outcomes and even prescribe optimal courses of action.


Gen AI strategies put CISOs in a stressful bind

One of the most problematic gen AI issues CISOs face is how casual many gen AI vendors are being when selecting the data used to train their models, Townsend said. “That creates a security risk for the organization.” ... generative AI’s penetration into SaaS solutions makes this more problematic. “The attack surface for gen AI has changed. It used to be enterprise users using foundation models provided by the biggest providers. Today, hundreds of SaaS applications have embedded LLMs that are in use across the enterprise,” said Routh, who today serves as chief trust officer at security vendor Saviynt. “Software engineers have more than 1 million open source LLMs at their disposal on HuggingFace.com.” ... All this can take a psychological toll on CISOs, Townsend surmised. “When they feel overwhelmed, they shut down,” he said. “They do what they feel they can, and they will ignore what they feel that they can’t control.” ... “The bad actors are feverishly working to exploit these new technologies in malicious ways, so the CISOs are right to be concerned about how these new gen AI solutions and systems can be exploited,” Taylor said. 


How Enterprises and Startups Can Master AI With Smarter Data Practices

For enterprises, however, supplying AI systems with the data they need to thrive is several orders of magnitude more complicated. There are two main reasons for this: First, enterprises don’t have the information aggregation ability that exists in the consumer AI world. Consumer AI companies can use any public data on the web to train their AI models; think of it as an entire continent of information to which they have unfettered access. Enterprise data, on the other hand, exists within small, disparate, and oftentimes disconnected information archipelagos. Additionally, enterprises are working with many types of data, including relational data from operational systems, decades of poorly organized folders of documents, and audio and numeric data from payroll and financial systems. Further, enterprises must contend with additional layers of regulatory complexity regarding handling personal and private data. To build impactful AI tools, an enterprise’s algorithms must be fed or trained on specific data sets that span multiple sources, including the company’s human resources, finance, customer relationship management, supply chain management, and other systems.


Yes, you should use AI coding assistants—but not like that

AI is a must for software developers, but not because it removes work. Rather, it changes how developers should work. For those who just entrust their coding to a machine, well, the results are dire. ... Use AI wrong and things get worse, not better. Stanford researcher Yegor Denisov-Blanch notes that his team has found that AI increases both the amount of code delivered and the amount of code that needs reworking, which means that “actual ‘useful delivered code’ doesn’t always increase” with AI. In short, “some people manage to be less productive with AI.” So how do you ensure you get more done with coding assistants, not less? ... Here’s the solution: If you want to use AI coding assistants, don’t use them as an excuse not to learn to code. The robots aren’t going to do it for you. The engineers who will get the most out of AI assistants are those who know software best. They’ll know when to give control to the coding assistant and how to constrain that assistance (perhaps to narrow the scope of the problem they allow it to work on). Less-experienced engineers run the risk of moving fast but then getting stuck or not recognizing the bugs that the AI has created. ... AI can’t replace good programming, because it really doesn’t do good programming.


AI Tools Amplify API Security Threats Worldwide

The financial implications of API breaches prove substantial. According to Kong's report, 55% of organizations experienced an API security incident in the past year. Among those affected, 47% reported remediation costs exceeding $100,000, while 20% faced expenses surpassing $500,000. Gartner's research underscores this urgency, highlighting that API breaches typically result in ten times more leaked data than other types of security incidents. ... While AI technologies, particularly LLMs, drive unprecedented innovation, they introduce new vulnerabilities. These advanced tools enable attackers to exploit shadow APIs, bypass traditional defenses and manipulate API traffic in unexpected ways. The survey indicates that 84% of leaders predict AI and LLMs will increase the complexity of securing APIs over the next two to three years, emphasizing the need for immediate action. Despite 92% of organizations implementing measures to secure their APIs, 40% of leaders remain skeptical about whether their investments will adequately counter AI-driven risks. The regional disparity in preparedness stands out: 13% of U.S. organizations acknowledge taking no specific measures against AI threats, compared to 4% in the U.K.


From AI Assistants to Swarms of Thousands of Collaborating AI Agents: Is Your Architecture Ready?

Agentic AI is likely to create more issues in some areas than in others. The Agentic Architecture Framework identifies seven areas that will require more support, in the form of new or updated frameworks, tools, and techniques, for Agentic AI capability-building and architecture development. ... Agentic AI Strategy begins with defining a clear target state across the Agentic AI maturity dimensions and levels. This step establishes the organization’s AI aspirations and provides a benchmark for future transformation. Once the target state is identified, the next step is a gap analysis to determine the differences between the current capabilities identified in the previous step and the organization’s ambition. With these gaps clarified, organizations can then focus on identifying and quantifying high-impact AI use cases that align with business objectives and support progression toward the target state. ... The Agentic AI Operating Model defines how AI systems, people, and processes work together to deliver value. It focuses on integrating AI into the organization’s core operations, ensuring that AI agents operate seamlessly within new and existing workflows and alongside human teams.
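The target-state-then-gap-analysis step described above can be sketched in a few lines. This is a minimal illustration, not part of the Agentic Architecture Framework itself: the dimension names and numeric maturity levels below are invented assumptions, standing in for whatever dimensions and scoring scheme an organization actually uses.

```python
# Hypothetical maturity scores (level 1-5) per dimension.
# Dimension names and values are illustrative assumptions only.
current = {"strategy": 1, "operating_model": 2, "governance": 1}
target = {"strategy": 3, "operating_model": 3, "governance": 4}

# Gap analysis: difference between ambition and current capability.
gaps = {dim: target[dim] - current[dim] for dim in target}

# Rank dimensions by gap size so the largest gaps are addressed first.
priorities = sorted(gaps, key=gaps.get, reverse=True)
print(gaps)        # {'strategy': 2, 'operating_model': 1, 'governance': 3}
print(priorities)  # ['governance', 'strategy', 'operating_model']
```

The point of the sketch is the ordering of steps, not the arithmetic: define the target first, measure the gap against it, and only then rank where to invest, which is the same sequence the framework prescribes.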



Quote for the day:

"No person can be a great leader unless he takes genuine joy in the successes of those under him." -- W. A. Nance