Daily Tech Digest - February 28, 2025


Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel


Microservice Integration Testing a Pain? Try Shadow Testing

Shadow testing is especially useful for microservices with frequent deployments, helping services evolve without breaking dependencies. It validates schema and API changes early, reducing risk before consumer impact. It also assesses performance under real conditions and ensures proper compatibility with third-party services. ... Shadow testing doesn’t replace traditional testing but rather complements it by reducing reliance on fragile integration tests. While unit tests remain essential for validating logic and end-to-end tests catch high-level failures, shadow testing fills the gap of real-world validation without disrupting users. Shadow testing follows a common pattern regardless of environment and has been implemented by tools like Diffy from Twitter/X, which introduced automated-response comparisons to detect discrepancies effectively. ... The environment where shadow testing is performed may vary, providing different benefits, and more realistic environments are generally better. Staging shadow testing is easier to set up, avoids compliance and data isolation issues, and can use synthetic or anonymized production traffic to validate changes safely. Production shadow testing provides the most accurate validation using live traffic but requires safeguards for data handling, compliance and test workload isolation.
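The mirror-and-compare pattern the excerpt attributes to tools like Diffy can be sketched in a few lines: serve the stable version's response to the user, replay the same request against the candidate, and record any divergence for later review. A minimal Python sketch; the handler functions and response fields are invented for illustration:

```python
def shadow_compare(request, primary_handler, shadow_handler):
    """Serve the primary response to the user, mirror the request to the
    shadow (candidate) version, and record any divergence.
    Real systems mirror traffic at the proxy layer; these handlers are
    hypothetical stand-ins."""
    primary = primary_handler(request)
    discrepancies = []
    try:
        # A failure in the shadow path must never affect the user.
        shadow = shadow_handler(request)
        if shadow != primary:
            discrepancies.append(
                {"request": request, "primary": primary, "shadow": shadow}
            )
    except Exception as exc:
        discrepancies.append({"request": request, "shadow_error": str(exc)})
    return primary, discrepancies

# Example: the candidate renames a response field -- the diff surfaces it
# before any consumer is broken.
old = lambda req: {"id": req["id"], "total": 100}
new = lambda req: {"id": req["id"], "amount": 100}
resp, diffs = shadow_compare({"id": 7}, old, new)
```

The user always receives the primary response; the recorded discrepancies feed a review queue rather than the live reply.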


The rising threat of shadow AI

Creating an Office of Responsible AI can play a vital role in a governance model. This office should include representatives from IT, security, legal, compliance, and human resources to ensure that all facets of the organization have input in decision-making regarding AI tools. This collaborative approach can help mitigate the risks associated with shadow AI applications. You want to ensure that employees have secure and sanctioned tools. Don’t forbid AI—teach people how to use it safely. Indeed, the “ban all tools” approach never works; it lowers morale, causes turnover, and may even create legal or HR issues. The call to action is clear: Cloud security administrators must proactively address the shadow AI challenge. This involves auditing current AI usage within the organization and continuously monitoring network traffic and data flows for any signs of unauthorized tool deployment. Yes, we’re creating AI cops. However, don’t think they get to run around and point fingers at people or let your cloud providers point fingers at you. This is one of those problems that can only be solved with a proactive education program aimed at making employees more productive and not afraid of getting fired. Shadow AI is yet another buzzword to track, but it’s also undeniably a growing problem for cloud computing security administrators.


Can AI live up to its promise?

The debate about truly transformative AI may not be about whether it can think or be conscious like a human, but rather about its ability to perform complex tasks across different domains autonomously and effectively. It is important to recognize that the value and usefulness of machines do not depend on their ability to exactly replicate human thought and cognitive abilities, but rather on their ability to achieve similar or better results through different methods. Although the human brain has inspired much of the development of contemporary AI, it need not be the definitive model for the design of superior AI. Perhaps by freeing the development of AI from strict neural emulation, researchers can explore novel architectures and approaches that optimize different objectives, constraints, and capabilities, potentially overcoming the limitations of human cognition in certain contexts. ... Some human factors that could be stumbling blocks on the road to transformative AI include: the information overload we receive, the possible misalignment with our human values, the possible negative perception we may be acquiring, the view of AI as our competitor, the excessive dependence on human experience, the possible perception of futility of ethics in AI, the loss of trust, overregulation, diluted efforts in research and application, the idea of human obsolescence, or the possibility of an “AI-cracy”, for example.


The end of net neutrality: A wake-up call for a decentralized internet

We live in a time when the true ideals of a free and open internet are under attack. The most recent repeal of net neutrality regulations is taking us toward a more centralized, controlled version of the internet. In this scenario, a decentralized, permissionless internet offers a powerful alternative to today’s reality. Decentralized systems can address the threat of censorship by distributing content across a network of nodes, ensuring that no single entity can block or suppress information. Decentralized physical infrastructure networks (DePIN) demonstrate how decentralized storage can keep data accessible even when network parts are disrupted or taken offline. This censorship resistance is crucial in regions where governments or corporations try to limit free expression online. Decentralization can also cultivate economic democracy by eliminating intermediaries like ISPs and related fees. Blockchain-based platforms allow smaller, newer players to compete with incumbent services and content companies on a level playing field. The Helium network, for example, uses a decentralized model to challenge traditional telecom monopolies with community-driven wireless infrastructure. In a decentralized system, developers don’t need approval from ISPs to launch new services.


Steering by insights: A C-Suite guide to make data work for everyone

With massive volumes of data to make sense of, having reliable and scalable modern data architectures that can organise and store data in a structured, secure, and governed manner while ensuring data reliability and integrity is critical. This is especially true in the hybrid, multi-cloud environment in which companies operate today. Furthermore, as we face a new “AI summer”, executives are experiencing increased pressure to respond to the tsunami of hype around AI and its promise to enhance efficiency and competitive differentiation. This means companies will need to rely on high-quality, verifiable data to implement AI-powered technologies such as Generative AI and Large Language Models (LLMs) at an enterprise scale. ... Beyond infrastructure, companies in India need to look at ways to create a culture of data. In today’s digital-first organisations, many businesses require real-time analytics to operate efficiently. To enable this, organisations need to create data platforms that are easy to use and equipped with the latest tools and controls so that employees at every level can get their hands on the right data to unlock productivity, saving them valuable time for other strategic priorities. Building a data culture also needs to come from the top; it is imperative to ensure that data is valued and used strategically and consistently to drive decision-making.


The Hidden Cost of Compliance: When Regulations Weaken Security

What might be a bit surprising, however, is one particular pain point that customers in this vertical bring up repeatedly. What is this mysterious pain point? I’m not sure if it has an official name or not, but many people I meet with share with me that they are spending so much time responding to regulatory findings that they hardly have time for anything else. This is troubling to say the least. It may be an uncomfortable discussion to have, but I’d argue that it is long past time that we as a security community had this discussion. ... The threats enterprises face change and evolve quickly – even rapidly I might say. Regulations often have trouble keeping up with the pace of that change. This means that enterprises are often forced to solve last year’s or even last decade’s problems, rather than the problems that might actually pose a far greater threat to the enterprise. In my opinion, regulatory agencies need to move more quickly to keep pace with the changing threat landscape. ... Regulations are often produced by large, bureaucratic bodies that do not move particularly quickly. This means that if some part of the regulation is ineffective, overly burdensome, impractical, or otherwise needs adjusting, it may take some time before this change happens. In the interim, enterprises have no choice but to comply with something that the regulatory body has already acknowledged needs adjusting.


Why the future of privileged access must include IoT – securing the unseen

The application of PAM to IoT devices brings unique complexities. The vast variety of IoT devices, many of which have been operational for years, often lack built-in security, user interfaces, or associated users. Unlike traditional identity management, which revolves around human credentials, IoT devices rely on keys and certificates, with each device undergoing a complex identity lifecycle over its operational lifespan. Managing these identities across thousands of devices is a resource-intensive task, exacerbated by constrained IT budgets and staff shortages. ... Implementing a PAM solution for IoT involves several steps. Before anything else, organisations need to achieve visibility of their network. Many currently lack this crucial insight, making it difficult to identify vulnerabilities or manage device access effectively. Once this visibility is achieved, organisations must then identify and secure high-risk privileged accounts to prevent them from becoming entry points for attackers. Automated credential management is essential to replace manual password processes, ensuring consistency and reducing oversight. Policies must be enforced to authorise access based on pre-defined rules, guaranteeing secure connections from the outset. Default credentials – a common exploit for attackers – should be updated regularly, and automation can handle this efficiently. 
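The credential steps above (replacing default passwords and automating rotation) can be sketched as follows. The device IDs, the in-memory vault, and the default-password list are hypothetical stand-ins; a real deployment would call each device's management API and a proper secrets manager:

```python
import secrets
import string

def rotate_credentials(devices, vault, is_default):
    """Replace default or compromised device passwords with random ones and
    record them in a vault so access stays authorised and auditable.
    `devices` and `vault` are plain dicts here purely for illustration."""
    alphabet = string.ascii_letters + string.digits
    rotated = []
    for device_id, password in devices.items():
        if is_default(password):
            new_password = "".join(secrets.choice(alphabet) for _ in range(24))
            devices[device_id] = new_password  # push to the device
            vault[device_id] = new_password    # store for authorised access
            rotated.append(device_id)
    return rotated

# Hypothetical fleet: two devices still ship with factory credentials.
DEFAULTS = {"admin", "password", "12345"}
fleet = {"cam-01": "admin", "sensor-02": "kX9rotatedAlready", "plc-03": "password"}
vault = {}
changed = rotate_credentials(fleet, vault, lambda p: p in DEFAULTS)
```

Running such a job on a schedule is what turns the article's "updated regularly" advice into something automation can actually enforce.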


Understanding the AI Act and its compliance challenges

There is a clear tension between the transparency obligations imposed on providers of certain AI systems under the AI Act and some of their rights and business interests, such as the protection of trade secrets and intellectual property. The EU legislator has expressly recognized this tension, as multiple provisions of the AI Act state that transparency obligations are without prejudice to intellectual property rights. For example, Article 53 of the AI Act, which requires providers of general-purpose AI models to provide certain information to organizations that wish to integrate the model downstream, explicitly calls out the need to observe and protect intellectual property rights and confidential business information or trade secrets. In practice, a good faith effort from all parties will be required to find the appropriate balance between the need for transparency to ensure safe, reliable and trustworthy AI, while protecting the interests of providers that invest significant resources in AI development. ... The AI Act imposes a number of obligations on AI system vendors that will help in-house lawyers in carrying out this diligence. Under Article 13 of the AI Act, vendors of high-risk AI systems are, for example, required to provide sufficient information to (business) deployers to allow them to understand the high-risk AI system’s operation and interpret its output.


Why fast-learning robots are wearing Meta glasses

The technology acts as a sophisticated translator between human and robotic movement. Using mathematical techniques called Gaussian normalization, the system maps the rotations of a human wrist to the precise joint angles of a robot arm, ensuring natural motions get converted into mechanical actions without dangerous exaggerations. This movement translation works alongside a shared visual understanding — both the human demonstrator’s smartglasses and the robot’s cameras feed into the same artificial intelligence program, creating common ground for interpreting objects and environments. ... The EgoMimic researchers didn’t invent the concept of using consumer electronics to train robots. One pioneer in the field, a former healthcare-robot researcher named Dr. Sarah Zhang, has demonstrated 40% improvements in the speed of training healthcare robots using smartphones and digital cameras; they enable nurses to teach robots through gestures, voice commands, and real-time demonstrations instead of complicated programming. This improved robot training is made possible by AI that can learn from fewer examples. A nurse might show a robot how to deliver medications twice, and the robot generalizes the task to handle variations like avoiding obstacles or adjusting schedules. 
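One plausible reading of the "Gaussian normalization" mapping described above is to express a human wrist angle as a z-score against typical human motion, rescale that z-score into the robot joint's motion distribution, and clamp the result at the joint's hard limits so exaggerated demonstrations never become dangerous commands. The statistics below are invented illustration values, not EgoMimic's actual parameters:

```python
def map_wrist_to_joint(human_angle, human_mean, human_std,
                       robot_mean, robot_std, joint_limits):
    """Map a human wrist rotation (degrees) to a robot joint angle by
    normalizing against human motion statistics and rescaling into the
    robot's motion statistics, then clamping to the joint limits."""
    z = (human_angle - human_mean) / human_std       # how unusual is this motion?
    joint_angle = robot_mean + z * robot_std         # same "unusualness" for the robot
    low, high = joint_limits
    return max(low, min(high, joint_angle))          # never exceed hardware limits

# A brisk 60-degree human wrist motion becomes a gentler robot motion...
gentle = map_wrist_to_joint(60.0, 0.0, 30.0, 0.0, 20.0, (-45.0, 45.0))
# ...and an extreme 120-degree swing is clamped at the joint limit.
clamped = map_wrist_to_joint(120.0, 0.0, 30.0, 0.0, 20.0, (-45.0, 45.0))
```

The clamp is what prevents the "dangerous exaggerations" the article mentions from ever reaching the actuators.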


Targeted by Ransomware, Middle East Banks Shore Up Security

The financial services industry in UAE — and the Middle East at large — sees cyber wargaming as an important way to identify weaknesses and develop defenses to the latest threats, Jamal Saleh, director general of the UAE Banks Federation, said in a statement announcing the completion of the event. "The rapid adoption and deployment of advanced technologies in the banking and financial sector have increased risks related to transaction security and digital infrastructure," he said in the statement, adding that the sector is increasingly aware "of the importance of such initiatives to enhance cybersecurity systems and ensure a secure and advanced environment for customers, especially with the rapid developments in modern technology and the rise of cybersecurity threats using advanced artificial intelligence (AI) techniques." ... Ransomware remains a major threat to the financial industry, but attackers have shifted from distributed denial-of-service (DDoS) attacks to phishing, data breaches, and identity-focused attacks, according to Shilpi Handa, associate research director for the Middle East, Turkey, and Africa at business intelligence firm IDC. "We see trends such as increased investment in identity and data security, the adoption of integrated security platforms, and a focus on operational technology security in the finance sector," she says. 

Daily Tech Digest - February 27, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



Breach Notification Service Tackles Infostealing Malware

Infostealers can amass massive quantities of credentials. To handle this glut, many cybercriminals create parsers to quickly ingest usernames and passwords for analysis, said Milivoj Rajić, head of threat intelligence at cybersecurity firm DynaRisk. The leaked internal communications of ransomware group Black Basta demonstrated this tactic, he said. Using a shared spreadsheet, the group identified organizations with emails present in infostealer logs, tested which access credentials worked, checked the organization's annual revenue and if its networks were protected by MFA. Using this information helped the ransomware group prioritize its targeting. Another measure of just how much data gets collected by infostealers: the Alien Txtbase records include 244 million passwords not already recorded as breached by Pwned Passwords. Hunt launched that free service, which anyone can query anonymously, in 2017, shortly after the U.S. National Institute of Standards and Technology began recommending that users avoid any password that has appeared in a known data breach. Not all of the information contained in stealer logs being sold by criminals is necessarily legit. Some of it might be recycled from previous leaks or data dumps. Even so, Hunt said he was able to verify a random sample of the Alien Txtbase corpus with a "handful" of HIBP users he approached.
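Pwned Passwords can be queried without ever revealing the password, via a k-anonymity scheme: the client sends only the first five hex characters of the password's SHA-1 hash, and the service returns every matching hash suffix with its breach count. A sketch of the client-side logic; the sample response below is fabricated for illustration, where a real lookup would fetch the range endpoint shown in the comment:

```python
import hashlib

def hibp_range_query(password):
    """Split a password's SHA-1 hash for the HIBP k-anonymity API:
    only the 5-character prefix is ever sent over the network."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Parse the 'SUFFIX:COUNT' lines returned by
    GET https://api.pwnedpasswords.com/range/<prefix>
    and look up how often our full hash appears in known breaches."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0  # suffix absent: password not in the breached set

prefix, suffix = hibp_range_query("password123")
# Fabricated stand-in for the API response body:
sample = suffix + ":12345\n0018A45C4D1DEF81644B54AB7F969B88D65:3"
```

Because matching happens locally against the returned suffix list, the service never learns which password was checked.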


The critical role of strategic workforce planning in the age of AI

While some companies have successfully deployed strategic workforce planning in the past to reshape their workforces to meet future market requirements, there are also cautionary tales of organizations that have struggled with the transition to new technologies. For instance, the rapid innovation of smartphones left leading players such as Nokia behind. Periods of rapid technological change highlight the importance of predicting and responding to challenges with a dynamic talent planning model. Gen AI is not just another technological advancement affecting specific tasks; it represents a rewiring of how organizations operate and generate value. This transformation goes beyond automation, innovation, and productivity improvements to fundamentally alter the ratio of humans to technology in organizations. By having SWP in place, organizations can react more quickly and intentionally to these changes, monitoring leading and lagging indicators to stay ahead of the curve. This approach allows for identifying and developing new capabilities, ensuring that the workforce is prepared for the evolving demands these changes will bring. SWP gives a fact base to all talent decisions so that trade-offs can be explicitly discussed and strategic decisions can be made holistically—and with enterprise value top of mind. 


Cybersecurity in fintech: Protecting user data and preventing fraud

Fintech companies operate at the intersection of finance and technology, making them particularly vulnerable to cyber threats. These platforms process vast amounts of personal and financial data—from bank account details and credit card numbers to loan records and transaction histories. A single security breach can have devastating consequences, leading to financial losses, regulatory penalties, and reputational damage. Beyond individual risks, fintech platforms are interconnected within a larger financial ecosystem. A vulnerability in one system can cascade across multiple institutions, disrupting transactions, exposing sensitive data, and eroding trust. Given this landscape, cybersecurity in fintech is not just about preventing attacks—it’s about ensuring the integrity of the entire digital financial infrastructure. ... Governments and regulatory bodies worldwide recognise the critical role of cybersecurity in fintech. Frameworks like the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the U.S. set stringent standards for data privacy and security. Compliance is not just a legal necessity—it’s an opportunity for fintech companies to build trust with users. By adhering to global security best practices, fintech firms can differentiate themselves in an increasingly competitive market while ensuring customer data remains protected.


The Smart Entrepreneur's Guide to Thriving in Uncertain Times

If there's one certainty in business, it's change. The most successful entrepreneurs aren't just those who have great ideas — they are the ones who know how to adapt. Whether it's economic downturns, shifts in consumer behavior or emerging competition, the ability to navigate uncertainty is what separates sustainable businesses from those that struggle to survive. ... Instead of long-term strategies that assume stability, use quick experiments to validate new ideas and adjust quickly. When we launched new membership models at our office, we tested different pricing structures and adjusted based on user feedback within weeks rather than months. ... Digital engagement is changing. Entrepreneurs who optimize their messaging based on social media trends and consumer preferences gain a competitive edge. For example, when we noticed an increase in demand for remote work solutions, we adjusted our marketing efforts to highlight our virtual office plans. ... strong company culture that embraces change enables faster adaptation during challenging times. Jim Collins, in Good to Great, emphasizes that having the right people in the right seats is fundamental for long-term success. At Coworking Smart, we focused on hiring individuals who thrived in dynamic environments rather than just filling positions based on traditional job descriptions.


Risk Management for the IT Supply Chain

Who are your mission critical vendors? Do they present significant risks (for example, risk of a merger, or going out of business)? Where are your IT supply chain “weak links” (such as vendors whose products and services repeatedly fail)? Are they impairing your ability to provide top-grade IT to the business? What countries do you operate in? Are there technology and support issues that could emerge in those locations? Do you annually send vendors questionnaires so you can ascertain that they are strong, reliable and trustworthy suppliers? Do you request your auditors periodically review IT supply chain vendors for resiliency, compliance and security? ... Most enterprises include security and compliance checkpoints on their initial dealings with vendors, but few check back with the vendors on a regular basis after the contracts are signed. Security and governance guidelines change from year to year. Have your IT vendors kept up? When was the last time you requested their latest security and governance audit reports from them? Verifying that vendors stay in step with your company’s security and governance requirements should be done annually. ... Although companies include their production supply chains in their corporate risk management plans, they don’t consistently consider the IT supply chain and its risks.


IT infrastructure: Inventory before AIOps

Even if the advantages are clear, the right story is also needed internally to initiate an introduction. Benedikt Ernst from the IBM spin-off Kyndryl sees a certain “shock potential,” especially in the financial dimension, which is ideally anticipated in advance: “The argumentation of costs is crucial because the introduction of AIOps is, of course, an investment in the first instance. Organizations need to ask themselves: How quickly is a problem detected and resolved today? And how does an accelerated resolution affect operating costs and downtime?” In addition, there is another aspect that he believes is too often overlooked: “Ultimately, the introduction of AIOps also reveals potential on the employee side. The fewer manual interventions in the infrastructure are necessary, the more employees can focus on things that really require their attention. For this reason, I see the use of open integration platforms as helpful in making automation and AIOps usable across different platforms.” Storm Reply’s Henckel even sees AIOps as a tool for greater harmony: “The introduction of AIOps also means an end to finger-pointing between departments. With all the different sources of error — database, server, operating system — it used to be difficult to pinpoint the cause of the error. AIOps provides detailed analysis across all areas and brings more harmony to infrastructure evaluation.”


Navigating Supply Chain Risk in AI Chips

The fragmented nature of semiconductor production poses significant challenges for supplier risk management. Beyond the risk posed by delays in delivery or production, which can disrupt operations, such a globalized and complex supply chain poses challenges from a regulatory angle. Chipmakers must take full responsibility for ensuring compliance at every level by thoroughly monitoring and vetting every entity in the supply chain for risks such as forced labor, sanctions violations, bribery, and corruption. ... Many companies are diversifying their supplier base, increasing local procurement efforts, and using predictive modeling to better anticipate demand to address the risk of disruption triggered by delays in delivery or operations. By leveraging advanced data analytics and securing multiple supply routes, businesses can increase resilience to external shocks and mitigate the risk of supply chain delays. Additionally, firms can incorporate a “value at risk” model into supply chain and operational risk management frameworks. This approach quantifies the financial impact of potential supply chain disruptions, helping chipmakers prioritize the most critical risk areas. ... The AI chip supply chain is a cornerstone of modern innovation, but due to its global and interdependent nature, it is inherently complex.
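The "value at risk" idea mentioned above can be illustrated with a small Monte Carlo simulation: assign each disruption scenario an annual probability and a loss estimate, simulate many years, and read off the loss level exceeded in only 5% of them. All probabilities and dollar figures here are invented for illustration, not industry data:

```python
import random

def supply_chain_var(disruptions, confidence=0.95, trials=10_000, seed=42):
    """Monte Carlo sketch of supply chain value at risk: simulate annual
    losses from independent disruption scenarios and return the loss level
    exceeded in only (1 - confidence) of simulated years."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    yearly_losses = []
    for _ in range(trials):
        loss = sum(cost for prob, cost in disruptions if rng.random() < prob)
        yearly_losses.append(loss)
    yearly_losses.sort()
    return yearly_losses[int(confidence * trials)]

# Hypothetical scenarios: (annual probability, estimated loss in dollars).
scenarios = [
    (0.10, 4_000_000),   # key fab delays delivery by a quarter
    (0.05, 12_000_000),  # export-control change blocks a supplier
    (0.20, 1_500_000),   # logistics disruption on a major route
]
var_95 = supply_chain_var(scenarios)
```

The resulting figure gives risk teams a single prioritization number: scenarios that dominate the tail of the loss distribution deserve mitigation spend first.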


Charting the AI-fuelled evolution of embedded analytics

The idea behind embedded analytics is to negate a great deal of the friction around data insights. In theory, line-of-business users have been able to view relevant insights for a long time, by allowing them to import data into the self-service business intelligence (SSBI) tool of their choice. In practice, this disrupts their workflow and interrupts their chain of thought, so a lot of people choose not to make that switch. They’re even less likely to do so if they have to manually export and migrate the data to a different tool. That means they’re missing out on data insights, just when they could be the most valuable for their decisions. Embedded analytics delivers all the charts and insights alongside whatever the user is working on at the time – be it an accounting app, a CRM, a social media management platform or whatever else – which is far more useful. “It’s a lot more intuitive, a lot more functional if it’s in the same place,” says Perez. “Also, generally speaking, the people who use these types of business apps are non-technical, and so the more complicated you make it for them to get to the analysis, the less of it they’ll do.” ... So far, so impressive. But Perez emphasises that there are a number of barriers to embedded analytics utopia. Businesses need to bear these in mind as they seek to develop their own solutions or find providers who can deliver them.


Open source software vulnerabilities found in 86% of codebases

jQuery, a JavaScript library, was the most frequent source of vulnerabilities, as eight of the top 10 high-risk vulnerabilities were found there. Among scanned applications, 43% contained some version of jQuery — oftentimes, an outdated version. An XSS vulnerability affecting outdated versions of jQuery, called CVE-2020-11023, was the most frequently found high-risk vulnerability. McGuire remarks, “There’s also an interesting shift towards web-based and multi-tenant (SaaS) applications, meaning more high-severity vulnerabilities (81% of audited codebases). We also observed an overwhelming majority of high severity vulnerabilities belonging to jQuery. ... McGuire explains, “Embedded software providers are going to be increasingly focused on the quality, safety and reliability of the software they build. Looking at this year’s data, 79% of the codebases were using components whose latest versions had no development activity in the last two years. This means that these dependencies could become less reliable, so industries, like aerospace and medical devices should look to identify these in their own codebases and start moving on from them.” ... “Enterprise regulated organizations are being forced to align with numerous requirements, including providing SBOMs with their applications. If an SBOM isn’t accurate, it’s useless,” McGuire states. 
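Flagging CVE-2020-11023 in an inventory only requires a version comparison, since the fix shipped in jQuery 3.5.0. A minimal sketch; the dependency inventory is hypothetical, and real tooling would read versions out of an SBOM or lockfile rather than a hard-coded dict:

```python
def parse_version(v):
    """Turn a dotted version string like '3.4.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def affected_by_cve_2020_11023(jquery_version):
    """CVE-2020-11023 (XSS via HTML containing <option> elements passed to
    jQuery DOM manipulation methods) affects jQuery versions before 3.5.0,
    the release that fixed it."""
    return parse_version(jquery_version) < (3, 5, 0)

# Hypothetical inventory of jQuery versions found across scanned codebases.
codebase_deps = {"app-a": "1.12.4", "app-b": "3.4.1", "app-c": "3.6.0"}
flagged = {name: version for name, version in codebase_deps.items()
           if affected_by_cve_2020_11023(version)}
```

This is exactly the kind of check that an accurate SBOM makes trivial, and an inaccurate one makes impossible.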


A 5-step blueprint for cyber resilience

Many claim to practice developer security operations, or DevSecOps, by testing software for security flaws at every stage. At least that's the theory. In reality, developers are under constant pressure to get software into production, and DevSecOps can be an impediment to meeting deadlines. "You hear all these people saying, 'Yes, we're doing DevSecOps,' but the reality is, a lot of people aren't," says Lanowitz. "If you're really focused on being secure by design, you're going to want to do things right from the beginning, meaning you're going to want to have your network architecture correct, your software architecture correct." ... "We have to be able to speak the language of the business," says Lanowitz. "Break down the silos that exist in the organization, get the cyber team and the business team talking, [and] align cybersecurity initiatives with overarching business initiatives." Again, executive leadership needs to point the way, but it often needs convincing. Compliance is a great place to start, because most industries have rules, laws, or insurance providers that mandate a basic level of cybersecurity. ... The more eyes you have on a cybersecurity problem, the more quickly a solution can be found. Because of this, even large companies rely on external managed service providers (MSPs), managed security service providers (MSSPs), managed detection and response (MDR) providers, consultants and advisors.

Daily Tech Digest - February 26, 2025


Quote for the day:

“Happiness is a butterfly, which when pursued, is always beyond your grasp, but which, if you will sit down quietly, may alight upon you.” -- Nathaniel Hawthorne


Deep dive into Agentic AI stack

The Tool / Retrieval Layer forms the backbone of an intelligent agent’s ability to gather, process, and apply knowledge. It enables the agent to retrieve relevant information from diverse data sources, ensuring it has the necessary context to make informed decisions and execute tasks effectively. By integrating various databases, APIs, and knowledge structures, this layer acts as a bridge between raw data and actionable intelligence, equipping the agent with a robust understanding of its environment. ... The Action / Orchestration Layer is a critical component in an intelligent agent’s architecture, responsible for transforming insights and understanding into concrete, executable actions. It serves as the bridge between perception and execution, ensuring that workflows are effectively managed, tasks are executed efficiently, and system interactions remain seamless. This layer must handle the complexity of decision-making, automation, and resource coordination while maintaining adaptability to dynamic conditions. ... The Reasoning Layer is where the agent’s cognitive processes take place, enabling it to analyse data, understand context, draw inferences, and make informed decisions. This layer bridges raw data retrieval and actionable execution by leveraging advanced AI models and structured reasoning techniques. 
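The three layers described above compose naturally into a single control flow: retrieve context, reason over it, then orchestrate an action. A toy sketch with stand-in callables; the knowledge base and decision rule are invented, where real systems would plug in vector stores, LLM calls, and workflow engines:

```python
def run_agent(query, retrieve, reason, act):
    """Minimal pass through the agentic stack: the retrieval layer gathers
    context, the reasoning layer turns context into a decision, and the
    action/orchestration layer executes it."""
    context = retrieve(query)           # Tool / Retrieval layer
    decision = reason(query, context)   # Reasoning layer
    return act(decision)                # Action / Orchestration layer

# Toy stand-ins so the flow is runnable end to end.
kb = {"invoice 42": "status: unpaid, due 2025-03-15"}
retrieve = lambda q: kb.get(q, "no records")
reason = lambda q, ctx: ("send_reminder", q) if "unpaid" in ctx else ("noop", q)
act = lambda decision: f"executed {decision[0]} for {decision[1]}"

result = run_agent("invoice 42", retrieve, reason, act)
```

Each callable is independently swappable, which is the architectural point of separating the layers in the first place.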


AI Hijacked: New Jailbreak Exploits Chain-of-Thought

Several current AI models use chain-of-thought reasoning, an AI technique that helps large language models solve problems by breaking them down into a series of logical steps. The process aims to improve performance and safety by enabling the AI to verify its outputs. But "reasoning" also exposes a new attack surface, allowing adversaries to manipulate the AI's safety mechanisms. A research team comprising experts from Duke University, Accenture and Taiwan's National Tsing Hua University, found a vulnerability in how the models processed and displayed their reasoning. They developed a dataset called Malicious-Educator to test the vulnerability, designing prompts that tricked the models into overriding their built-in safety checks. These adversarial prompts exploited the AI's intermediate reasoning process, which is often displayed in user interfaces. ... The researchers acknowledged that they could be facilitating further jailbreaking attacks by publishing the Malicious-Educator dataset but argued that studying these vulnerabilities openly is necessary to develop stronger AI safety measures. A key distinction in this research is its focus on cloud-based models. AI models running in the cloud often include hidden safety filters that block harmful input prompts and moderate output in real-time. Local models lack these automatic safeguards unless users implement them manually. 


What CISOs need from the board: Mutual respect on expectations

The CISO requires specific and sustained support from the board to effectively protect the organization from cyber threats. A strong partnership between the CISO and board is essential for establishing and maintaining robust cybersecurity practices. My favourite saying is one that CISO Robert Veres relayed to me: The board should support the “Red” and challenge the “Green.” This support is exactly what the CISO requires as a foundation. The board must help set the overall strategic direction that aligns with the organization’s risk appetite. This high-level guidance provides the framework within which the CISO can develop and implement security programs. While the CISO establishes the cyber risk culture, they need the board to reinforce this by setting the appropriate tone from the top and ensuring cybersecurity compliance is prioritized across all levels of management and business units. This is a difficult task for some boards, as they may lack a good understanding of the business and its technology strategy. A critical requirement is for the CISO to have a strong mandate to operate with clear accountability. They need the authority to act and defend the enterprise without excessive interference, allowing them to respond quickly and effectively to emerging threats.


AI-Powered Ransomware Attacks

Integrating artificial intelligence into cyberattacks is fundamentally changing the threat landscape, creating challenges for individuals and organizations alike. Historically, cyber threats have been largely manual, relying on the ingenuity and persistence of the attacker. As AI has become more capable, scalable, and affordable, the nature of these threats has evolved. AI-based attacks can analyze vast amounts of data to identify weaknesses and launch highly targeted phishing campaigns that spread the latest malware with minimal human intervention. The speed and automation of AI-powered attacks mean that threats can emerge more suddenly than ever before. For instance, AI can automate the reconnaissance and surveillance stages, mapping targets quickly and precisely. This rapid vulnerability identification allows attackers to exploit flaws before they are patched, giving organizations less time to react. Additionally, AI can create modified malware that constantly evolves to evade detection by traditional security frameworks, making it far more difficult to defend against.


AI Factories: Separating Hype From Reality

While the concept is compelling, will we see this wave of AI factories that Jensen is promising? Probably not at scale. AI hardware is not only costly to acquire and operate, but it also doesn’t run continuously like a database server. Once a model is trained, it may not need updates for months, leaving this expensive infrastructure sitting idle. For that reason, Alan Howard, senior analyst at Omdia specializing in infrastructure and data centers, believes most AI hardware deployments will occur in multipurpose data centers. ... AI tech advances rapidly, and keeping up with the competition is prohibitively expensive, Palaniappan added. “When you start looking at how much each of these GPUs costs, and it gets outdated pretty quickly, that becomes a bottleneck,” he said. “If you are trying to leverage a data center, you’re always looking for the latest chip in the facility, so many of these data centers are losing money because of these efforts.” ... In addition to the cost of the GPUs, significant investment is required for networking hardware, as all the GPUs need to communicate with each other efficiently. Tom Traugott, senior vice president of strategy at EdgeCore Digital Infrastructure, explains that in a typical eight-GPU Nvidia DGX system, the GPUs communicate via NVLink. 


Overcoming Challenges of IT Integration in Cross-Border M&As

When companies agree to combine, things get complicated, particularly when blending their IT and digital operations. To that end, organizations must carefully outline how they plan to merge their IT departments to overcome associated challenges and avoid expensive disruptions. ... IT is the cornerstone of most multinational corporations. Determining how each merger participant will mesh its systems with the other is significant, particularly because 47% of M&A deals fail because of IT problems. IT due diligence is paramount. Not only does the process help identify priorities and risks beforehand, but it also lets the acquiring company properly evaluate the technical capabilities of the firm it intends to purchase. ... Cross-border M&As are subject to data privacy and compliance regulations that vary significantly across jurisdictions. When assessing an international merger, ensure there aren't any non-compliance risks and that the firm being acquired operates legitimately. Be aware of complex international data and privacy laws. Address any irregularities with a strong compliance strategy and retain expert legal counsel before signing the deal. ... In fact, cultural mismatch is one of the top reasons why M&As fail. 


10 machine learning mistakes and how to avoid them

Addressing biases is crucial to success in the modern AI landscape, Swita says. “Best practices include implementing continuous surveillance, alerting mechanisms, and content filtering to help proactively identify and rectify biased content,” he says. “Through these methodologies, organizations can develop AI frameworks that prioritize validated content.” To resolve bias, organizations need to embrace a dynamic approach that includes continually refining systems to keep pace with rapidly evolving models, Swita says. “Strategies need to be meticulously tailored for combating bias,” he says. ... Machine learning comes with certain legal and ethical risks. Legal risks include discrimination due to model bias, data privacy violations, security leaks, and intellectual property violations. These and other risks can have repercussions for developers and users of machine learning systems. Ethical risks include the potential for harm or exploitation, misuse of data, lack of transparency, and lack of accountability. Decisions based on machine learning algorithms can negatively affect individuals, even if that was not the intent. Swita reiterates the need to anchor models and output on trusted, validated, and regulated data. “By adhering to regulations and standards governing data usage and privacy, organizations can reduce the legal and ethical risks associated with machine learning,” he says.
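Swita's call for continuous surveillance and alerting on bias can be made concrete with a small sketch. The metric below, the demographic parity gap, is one standard fairness check for model decisions; the function names, the sample groups, and the 0.1 tolerance are illustrative assumptions, not from the article:

```python
# Minimal sketch: demographic parity gap as one continuous bias check.
# A monitoring job could compute this on every batch of model decisions
# and alert when the gap exceeds a tolerance.

def positive_rate(outcomes):
    """Fraction of positive (e.g. 'approved') decisions in a group."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in positive rates across groups."""
    rates = [positive_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

def check_bias(outcomes_by_group, tolerance=0.1):
    """True if the parity gap is within tolerance, else flag for review."""
    return demographic_parity_gap(outcomes_by_group) <= tolerance

decisions = {"group_a": [1, 1, 0, 1], "group_b": [1, 0, 0, 0]}
print(check_bias(decisions))  # gap = 0.75 - 0.25 = 0.5 -> False
```

A real deployment would compute this per protected attribute and over sliding time windows, but the alerting logic stays the same shape.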


Beyond the Buzz: What 2025's Tech Trends Mean for CIOs

Stemming from the large-scale deployment of AI is the issue of governance. Organizations need to use AI securely, responsibly and with accountability. A DLA Piper survey showed that 96% of firms using AI find governing AI systems a challenge. Some companies are already at the forefront of providing AI governance solutions. For instance, IBM Watsonx provides AI life cycle governance, risk management and regulatory compliance. Cisco AI Defense offers AI visibility, automated vulnerability scanning and real-time protections for AI assets. ... The rise of deepfakes and countless AI-generated misinformation campaigns has made disinformation security a crucial, non-negotiable imperative for enterprises. Although AI-based detection systems and blockchain-backed verification systems are evolving, they still lag behind the sophistication of adversarial tactics, pushing organizations toward adopting robust detection mechanisms and resilience strategies. ... Applications of ambient intelligence in healthcare monitoring and customer experience improvement are already in the works. For instance, in early 2024, Texas-based Houston Methodist forged a partnership with Apella, a startup that uses ambient sensor technology and AI to improve surgical processes in operating rooms. 


AI, automation spur efforts to upskill network pros

By developing skills in network monitoring, performance management, and cost optimization through automation and AI-powered tools, networking pros can become more adept at troubleshooting while offloading repetitive tasks such as copy-pasting configurations. Over time, they can gain the skills to better understand which behaviors and patterns to automate. According to Skillsoft’s Stanger, networking professionals can find it challenging to identify the appropriate tasks and workflows to automate. ... “The continuous growth in cloud technologies ensures that cloud computing skills remain in high demand. This includes a thorough understanding of cloud infrastructure and services, which is becoming crucial,” Randstad’s Heins says. “Particularly challenging for companies to find are skills related to cloud service management, especially when combined with AI competencies.” Designing the appropriate network infrastructure, especially for cloud-first and hybrid environments, will be critical for networking pros looking to support sophisticated cloud environments. According to Greg Fuller, chief evangelist and vice president of Skillsoft/Codecademy, cloud computing, in some cases, can lead to complacency in networking as it allows more flexibility to spin up networks quickly.
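As a sketch of the kind of repetitive configuration work worth automating first, the snippet below renders per-interface configs from a template instead of copy-pasting and hand-editing them. The template fields and device values are purely illustrative:

```python
from string import Template

# Illustrative config template; real automation would typically use a
# fuller templating engine and pull device data from an inventory system.
INTERFACE_TEMPLATE = Template(
    "interface $iface\n"
    " description $desc\n"
    " ip address $ip $mask\n"
)

def render_configs(devices):
    """Render one config snippet per device record."""
    return [INTERFACE_TEMPLATE.substitute(d) for d in devices]

devices = [
    {"iface": "Gi0/1", "desc": "uplink", "ip": "10.0.0.1", "mask": "255.255.255.0"},
    {"iface": "Gi0/2", "desc": "access", "ip": "10.0.1.1", "mask": "255.255.255.0"},
]
print(render_configs(devices)[0])
```

Even a sketch this small removes the copy-paste step where typos creep in, which is exactly the class of task the article suggests automating first.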


The future of data security and governance: Why organizations must rethink their strategy

AI is transforming industries, but it’s also introducing new risks. Businesses are racing to integrate AI-powered applications, often without fully understanding the implications for data security and compliance. AI models require vast amounts of data, much of it sensitive, and without proper governance, they can become a significant liability. ... Regulatory bodies worldwide are tightening their grip on data privacy and security. From GDPR and CCPA to emerging AI regulations, the compliance landscape is becoming increasingly complex. Businesses can no longer afford to treat compliance as an afterthought — it must be embedded into every aspect of data management. ... Enterprises now manage data across multiple cloud environments, SaaS applications and third-party vendors. The result? A complex web of data assets, many of which are unprotected and difficult to track. Security teams struggle with: Lack of visibility into where sensitive data resides; Data access governance challenges; Increased vulnerability to cyber threats and insider risks ... The time to act is now. Security, AI, risk and governance leaders must take a proactive approach to data security and governance, ensuring that their organizations are not just reacting to threats but staying ahead of them.

Daily Tech Digest - February 25, 2025


Quote for the day:

"Empowerment is the magic wand that turns a frog into a prince. Never estimate the power of the people, through true empowerment great leaders are born." -- Lama S. Bowen


Service as Software Changes Everything

Service as software, also referred to as SaaS 2.0, goes beyond layering AI atop existing applications. It centers on the concept of automating business processes through intelligent APIs and autonomous services. The framework aims to eliminate human input and involvement through AI agents that act and react to conditions based on events, behavioral changes, and feedback. The result is autonomous software. “Traditional SaaS provides cloud-based tools where staff still do the work. Service as software flips that script. Instead of having staff do the work, you're making calls to an API or using software that does the work for you,” says Mark Strefford, founder of TimelapseAI, a UK-based consulting firm. ... CIOs and IT leaders should start small and iterate, experts say. As an organization gains confidence and trust, it can expand the autonomy of a SaaS 2.0 component. “More AI initiatives have failed from starting too big than too small,” Strefford notes. Consequently, it’s critical to understand the entire workflow, build in oversight and protections, establish measurement and validation tools, and stay focused on outcomes. A few factors can make or break an initiative, Giron says. Data quality and the ability to integrate across systems are crucial. A framework for standardization is critical. This includes cleaning, standardizing, and preparing legacy data. 


The Missing Sustainability Perspective in Cloud Architecture

The Well-Architected Framework provides a structured approach to making architectural decisions. While it originally focused on operational, security, and financial trade-offs, the Sustainability Pillar introduces specific guidance for designing cloud solutions with minimal environmental impact. One key architectural trade-off is between performance efficiency and sustainability. While performance efficiency emphasizes speed and low latency, these benefits often come at the cost of over-provisioning resources. A more sustainable approach involves optimizing compute resources to ensure they are only consumed when necessary. Serverless computing solutions, such as AWS Lambda or Azure Functions, help minimize idle capacity by executing workloads only when triggered. Similarly, auto-scaling for containerized applications, such as Kubernetes Horizontal Pod Autoscaler (HPA) or AWS Fargate, ensures that resources are dynamically adjusted based on demand, preventing unnecessary energy consumption. Another critical balance is between cost optimization and sustainability. Traditional cost optimization strategies focus on reducing expenses, but without considering sustainability, businesses might make short-term cost-saving decisions that lead to long-term environmental inefficiencies. For example, many organizations store large volumes of data without assessing its relevance, leading to excessive storage-related energy use.
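The auto-scaling behavior mentioned above can be made concrete. The Kubernetes Horizontal Pod Autoscaler computes its target as desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric); the sketch below reproduces that rule, with illustrative min/max bounds:

```python
import math

# Sketch of the HPA scaling rule: capacity tracks demand, so pods (and the
# energy behind them) are not left idle when load drops. The min/max
# replica bounds here are illustrative.

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """desired = ceil(current * current_metric / target_metric), clamped."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods at 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(4, 90, 60))  # 6
# 4 pods at 15% CPU against a 60% target -> scale in to 1 pod.
print(desired_replicas(4, 15, 60))  # 1
```

The scale-in case is the sustainability win: instead of four pods idling overnight, one remains, and the rest of the capacity is released.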


Quantum Computing Has Arrived; We Need To Prepare For Its Impact

Many now believe that the power and speed of quantum computing will enable us to address some of the biggest and most difficult problems our civilization faces. Problem-solving will be made possible by quantum computing’s unprecedented processing speed and predictive analytics. That is a remarkable near-term potential. McKinsey & Company forecasts that quantum technologies could create up to $2 trillion in economic value by 2035. Quantum measurement and sensing is one field where quantum technologies have already made their appearance; navigational devices and magnetic resonance imaging already employ it. Quantum sensors detect and quantify minute changes in time, gravity, temperature, pressure, rotation, acceleration, frequency, and magnetic and electric fields using the smallest amounts of matter and energy. Quantum will have a direct impact on many scientific fields, including biology, chemistry, physics, and mathematics. Industry applications will have an impact on a wide range of fields, including healthcare, banking, communications, commerce, cybersecurity, energy, and space exploration. In other words, any sector in which data is a component. More specifically, quantum technology has incredible potential to transform a wide range of fields, including materials science, lasers, biotechnology, communications, genetic sequencing, and real-time data analytics.


Industrial System Cyberattacks Surge as OT Stays Vulnerable

"There's a higher propensity for manufacturing organizations to have cloud connectivity just as a way of doing business, because of the benefits of the public cloud for manufacturing, like for predictive analytics, just-in-time inventory management, and things along those lines," he says, pointing to Transportation Security Administration rules governing pipelines and logistics networks as one reason for the difference. "There is purposeful regulation to separate the IT-OT boundary — you tend to see multiple kinds of ring-fence layers of controls. ... There's a more conservative approach to outside-the-plant connectivity within logistics and transportation and natural resources," Geyer says. ... When it comes to cyber-defense, companies with operational technology should focus on protecting their most important functions, and that can vary by organization. One food-and-beverage company, for example, focuses on the most important production zones in the company, testing for weak and default passwords, checking for the existence of clear-text communications, and scanning for hard-coded credentials, says Claroty's Geyer. "The most important zone in each of their plants is milk receiving — if milk receiving fails, everything else is critical path and nothing can work throughout the plant," he says. 


How to create an effective incident response plan

“When you talk about BIA and RTOs [recovery time objective], you shouldn’t be just checking boxes,” Ennamli says. “You’re creating a map that shows you, and your decision-makers, exactly where to focus efforts when things go wrong. Basically, the nervous system of your business.” ... “And when the rubber hits the road during an actual incident, precious time is wasted on less important assets while critical business functions remain offline and not bringing in revenue,” he says. ... It’s vital to have robust communication protocols, says Jason Wingate, CEO at Emerald Ocean, a provider of brand development services. “You’re going to want a clear chain of command and communication,” he says. “Without established protocols, you’re about as effective as trying to coordinate a fire response with smoke signals.” The severity of the incident should inform the communications strategy, says David Taylor, a managing director at global consulting firm Protiviti. While cybersecurity team members actively responding to an incident will be in close contact and collaborating during an event, he says, others are likely not as plugged in or consistently informed. “Based on the assigned severity, stemming from the initial triage or a change to the level of severity based on new information during the response, governance should dictate the type, audience, and cadence of communications,” Taylor says.
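Taylor's point that assigned severity should dictate the audience and cadence of communications can be expressed as a simple lookup table in the incident response plan. The tiers, audiences, and intervals below are illustrative assumptions, not from the article:

```python
# Illustrative severity-to-communications governance matrix. Re-running the
# lookup when triage changes the severity is what keeps stakeholders
# informed at the right cadence.

COMMS_MATRIX = {
    "critical": {"audience": ["exec-team", "legal", "responders"], "update_every_hours": 1},
    "high":     {"audience": ["it-leadership", "responders"],      "update_every_hours": 4},
    "low":      {"audience": ["responders"],                       "update_every_hours": 24},
}

def comms_plan(severity):
    """Look up who gets updates, and how often, for a given severity."""
    return COMMS_MATRIX[severity.lower()]

print(comms_plan("critical")["update_every_hours"])  # 1
```

The value of writing this down before an incident is that no one debates the distribution list while critical functions are offline; the plan is looked up, not negotiated.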


AI-Powered DevOps: Transforming CI/CD Pipelines for Intelligent Automation

Traditional software testing faces mounting challenges, as organizations must assess every code change to ensure it does not degrade system performance or introduce bugs. Applications with extensive functionality demand large numbers of test cases and are therefore time-consuming to test; teams must manage those cases carefully, detail their requirements, and prioritize the most critical results in every scope. Smoke and regression testing, meanwhile, rerun the same test cases repeatedly, consuming even more time. As a result, it is difficult for the traditional approach to achieve the coverage modern applications need or to guarantee that every scenario is handled appropriately. ... Using ML-driven test automation increases efficiency by taking over these repetitive tasks. Automation accelerates the testing cycle, freeing teams for higher-value work. ML also integrates quality assessment into the delivery process, ensuring each piece of software is evaluated for high-risk areas, potential failures, and critical functions, which leads to better post-deployment results. Additionally, ML automation yields cost savings: automated testing cycles run with minimal operational cost and prevent defects from being deployed with the software. 


Prompt Engineering: Challenges, Strengths, and Its Place in Software Development's Future

Prompt engineering and programming share the goal of instructing machines but differ fundamentally in their methodologies. While programming relies on formalized syntax, deterministic execution, and precision to ensure consistency and reliability, prompt engineering leverages the adaptability of natural language. This flexibility, however, introduces certain challenges, such as ambiguity, variability, and unpredictability. ... Mastering prompt engineering requires a level of knowledge and expertise comparable to programming. While it leverages natural language, its effective use demands a deep understanding of AI model behavior, the application of specific techniques, and a commitment to continuous learning. Similar to programming, prompt engineering involves continual learning to stay proficient with a variety of evolving techniques. A recent literature review by OpenAI and Microsoft analyzed over 1,500 prompt engineering-related papers, categorizing the various strategies into a formal taxonomy. This literature review is indicative of the continuous evolution of prompt engineering, requiring practitioners to stay informed and refine their approaches to remain effective.


Avoiding vendor lock-in when using managed cloud security services

An ideal managed cloud security provider should take an agnostic approach. Their solution should be compatible with whatever CNAPP or CSPM solution you use. This gives you maximum flexibility to find the right provider without locking yourself into a specific solution. Advanced services may even enable you to take open-source tooling and get to a good place before expanding to a full cloud security solution. You could also partner with a managed cloud security service that leverages open standards and protocols. This approach will allow you to integrate new or additional vendors while reducing your dependency on proprietary technology. Training and building in-house knowledge also helps. A confident service won’t keep their knowledge to themselves and helps enable and provide training to your team along the way. ... And there’s IAM—a more complex but equally concerning component of cloud security. In recent news, a few breaches started with low-level credentials being obtained before the attackers escalated their own privileges to gain access to sensitive information. This is often due to overly permissive access given to humans and machines. It’s also one of the least understood components of the cloud. Still, if your managed cloud security service truly understands the cloud, it won’t ignore IAM, the foundation of cloud security.


Observability Can Get Expensive. Here’s How to Trim Costs

“At its core, the ‘store it all’ approach is meant to ensure that when something goes wrong, teams have access to everything so they can pinpoint the exact location of the failure in their infrastructure,” she said. “However, this has become increasingly infeasible as infrastructure becomes more complex and ephemeral; there is now just too much to collect without massive expense.” ... “Something that would otherwise take developers weeks to do — take an inventory of all telemetry collected and eliminate the lower value parts — can be available at the click of a button,” she said. A proper observability platform can continually analyze telemetry data in order to have the most up-to-date picture of what is useful rather than a one-time, manual audit “that’s essentially stale as soon as it gets done,” Villa said. “It’s less about organizations wanting to pay less for observability tools, but they’re thinking more long-term about their investment and choosing platforms that will save them down the line,” she said. “The more they save on data collection, the more they can reinvest into other areas of observability, including new signals like profiling that they might not have explored yet.” Moving from a “store it all” to a “store intelligently” strategy is not only the future of cost optimization, Villa said, but can also help make the haystack of data smaller.
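A "store intelligently" policy can be as simple as: always keep the telemetry that signals trouble, and sample the routine rest. The rules and the 10% sample rate below are illustrative, not tied to any specific observability platform:

```python
import random

# Sketch of an ingest-time retention decision. Errors and slow requests
# are always kept (the data you need when something goes wrong); healthy,
# repetitive events are sampled down.

def should_store(event, sample_rate=0.1, rng=random.random):
    """Decide whether to retain a telemetry event."""
    if event.get("level") == "error" or event.get("duration_ms", 0) > 1000:
        return True
    return rng() < sample_rate

print(should_store({"level": "error"}))                      # True
print(should_store({"level": "info", "duration_ms": 1500}))  # True (slow)
```

Injecting the random source (`rng`) keeps the policy testable; a real platform would also refine the rules continuously based on which data actually gets queried.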


The Aftermath of a Data Breach

For organizations, the aftermath of a data breach can be highly devastating. In an interconnected world, a single data vulnerability can cascade into decades of irreversible loss – intellectual, monetary, and reputational. The consequences paralyze even the most established businesses, uprooting them from their foundation. ... The severity of a data breach often depends on how long it goes undetected; however, identifying the breach is where the story actually begins. From containing the destruction and informing authorities to answering customers and paying for their damages, the road to business recovery is long and grueling. ... Organizations must create, implement, and integrate a data management policy in their organizational setup that provides a robust framework for managing data throughout its entire lifecycle, from creation to disposal. This policy should also include a data destruction policy that specifies data destruction methods, data wiping tools, type of erasure verification, and records of data destruction. It should further cover media control and sanitization, incident reporting, and the roles and responsibilities of the CIO, CISO, and privacy officer. Using a professional software-based data destruction tool erases data permanently from IT assets, including laptops, PCs, and Mac devices. 

Daily Tech Digest - February 24, 2025


Quote for the day:

"A tough hide with a tender heart is a goal that all leaders must have." -- Wayde Goodall


A smarter approach to training AI models

AI models are beginning to hit the limits of compute. Model size is far outpacing Moore’s Law and the advances in AI training chips. Training runs for large models can cost tens of millions of dollars due to the cost of chips. This issue has been acknowledged by prominent AI engineers including Ilya Sutskever. The costs have become so high that Anthropic has estimated that it could cost as much to update Claude as it did to develop it in the first place. Companies like Amazon are spending billions to erect new AI data centers in an effort to keep up with the demands of building new frontier models. ... With a better foundational understanding of how AI works, we can approach AI model training and deployment in new ways that require a fraction of the energy and compute, bringing the rigor of other sciences to AI with a principles-first approach. ... By eschewing the inefficiencies and less theoretically justified parts of deep learning, we create a path forward to the next generation of truly intelligent AI, one that we’ve seen surpass the wall deep learning has hit. We have to understand how learning works and build models with interpretability and efficiency in mind from the ground up, especially as high-risk applications of AI in sectors like finance and healthcare demand more than the nondeterministic behavior we’ve become accustomed to. 


Strategic? Functional? Tactical? Which type of CISO are you?

Various factors influence what type of CISO a company may need, says Patton, a former CISO now working as a cybersecurity executive advisor at Cisco. A large, older company with a big, complicated tech stack will need someone with different skills, experience, and leadership qualities than a cloud-native startup that’s rapidly growing and changing. A heavily regulated industry such as financial services, healthcare, or utilities needs someone steeped in how to navigate all the compliance requirements. ... The path professionals take to the CISO seat also influences what type or types of CISOs they tend to be, adds Matt Stamper, CEO, CISO, and executive advisor with Executive Advisors Group as well as a board member with the ISACA San Diego chapter. Different career paths forge different types of executives, he says. Those who advanced through technical roles typically retain a technology bent, while those who came up through governance and risk functions usually gravitate toward compliance-focused roles. ... “CISOs should and tend to lean into where they’re gifted,” says Jenai Marinkovic, vCISO and CTO with Tiro Security and a member of the Emerging Trends Working Group with the IT governance association ISACA.


Becoming Ransomware Ready: Why Continuous Validation Is Your Best Defense

With the nature of IOCs being subtle and intentionally difficult to detect, how do you know that your XDR is effectively nipping them all in the bud? You hope that it is, but security leaders are using continuous ransomware validation to get a lot more certainty than that. By safely emulating the full ransomware kill chain - from initial access and privilege escalation to encryption attempts - tools like Pentera validate whether security controls, including EDR and XDR solutions, trigger the necessary alerts and responses. If key IOCs like shadow copy deletion and process injection go undetected, then that's a crucial flag to prompt security teams to fine-tune detection rules and response workflows. ... Here's the reality: testing your defenses once a year leaves you exposed the other 364 days. Ransomware is constantly evolving, and so are the Indicators of Compromise (IOCs) used in attacks. Can you say with certainty that your EDR is detecting every IOC it should? The last thing you need to stress about is how threats are constantly changing into something your security tools will fail to recognize and aren't prepared to handle. That's why continuous ransomware validation is essential. With an automated process, you can continuously test your defenses to ensure they stand up against the latest threats.
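At its core, the validation loop described above reduces to a set comparison: which emulated IOCs produced no alert? A minimal sketch, with illustrative IOC names:

```python
# After safely emulating kill-chain behaviors, compare what was emulated
# against the alerts the EDR/XDR actually raised. Anything left over is
# the detection-tuning backlog. IOC names here are illustrative.

def detection_gaps(emulated_iocs, raised_alerts):
    """Return emulated IOCs that produced no alert, sorted for stable output."""
    return sorted(set(emulated_iocs) - set(raised_alerts))

emulated = ["shadow_copy_deletion", "process_injection", "mass_encryption"]
alerts   = ["process_injection", "mass_encryption"]
print(detection_gaps(emulated, alerts))  # ['shadow_copy_deletion']
```

Run continuously rather than annually, this comparison is what turns "we hope the XDR catches it" into an explicit, shrinking list of known gaps.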


US intensifies scrutiny of the EU’s Digital Markets Act

The DMA introduced unprecedented restrictions and requirements for companies designated as “gatekeepers” in the digital market. These companies must comply with a strict set of rules designed to prevent unfair business practices and ensure market accessibility for smaller competitors. The Act mandates various requirements including interoperability for core platform services, restrictions on personal data combination across services, and prohibition of self-preferencing practices in rankings and search results. “Big tech’s designated platforms can no longer unfairly promote their own products or services above yours (EU-based companies) in search results or ads,” states one of the DMA’s clauses pertaining to ensuring a level playing field. ... Meanwhile, the European Commission — where Ribera serves as the second-highest ranking official under President Ursula von der Leyen — maintains that these regulations are not targeted at US companies, according to the report. The Commission argued that the DMA is designed to ensure fair competition and consumer choice in digital markets, regardless of companies’ national origin. However, the predominance of US firms among those affected has intensified transatlantic tensions over digital policy.


The Technology Blueprint for CIOs: Expectations and Concerns

"Security sits at the front and center of business innovations, especially in sectors like banking and finance, where protecting user data and privacy is paramount. Every sector has its own unique challenges and opportunities, making a sector-driven approach essential," said Sachin Tayal, managing director at Protiviti member firm for India. AI-powered fraud detection systems are now integral, using behavior biometrics and facial recognition to detect and mitigate threats such as UPI frauds. Decentralized finance is also gaining traction, with blockchain-based solutions modernizing core banking functions and facilitating secure, transparent digital transactions, the report found. ... The industrial manufacturing sector is embracing Industry 4.0, characterized by the convergence of AI, IoT and cloud technologies. The market is seeing a shift toward digital twins and real-time analytics to optimize production processes. The integration of autonomous mobile robots and collaborative robots, cobots, is enhancing efficiency and safety on the production floor, the report said. ... CIOs have their work cut out - innovate or risk getting redundant. "Technology is driving businesses today, and the transformative role of the CIO amid the rapid rise of AI and digital innovations has never been more critical. The CIO now wears many hats - CTO, CISO and even CEO - as roles evolve to meet the demands of a digital-first world," Gupta said.


Man vs. machine: Striking the perfect balance in threat intelligence

One of the key things you must be aware of is your unconscious biases. Because we all have them. But being able to understand that and implement practices that challenge your assumptions, analysis and hypotheses is key to providing the best intelligence product. I think it’s a fascinating problem, particularly as it’s not necessarily something a SOC analyst or a vulnerability manager may consider, because it’s not really a part of their job to think that way, right? Fortunately, when it comes to working with the AI data, we can apply things like system prompts, we can be explicit in what we want to see as the output, and we can ask it to demonstrate where and why findings are identified, and their possible impact. Alongside that, I think the question also demonstrates the importance on why we as humans can’t forego things like training or maintaining skills. ... It’s also important that security continues to be a business enabler. There are times we interact with websites in countries that may have questionable points of view or human rights records. Does the AI block those countries because the training data indicates it shouldn’t support or provide access? Now some organisations will do domain blocking to an extreme level and require processes and approvals to access a website, it’s archaic and ridiculous in my opinion. Can AI help in that space? Almost certainly. 


AI and the Future of Software Testing: Will Human Testers Become Obsolete?

With generative AI tools, it has become possible to produce software testing code automatically. QA engineers can simply describe what they want to test and specify a testing framework, tool, or language, then let generative AI do the tedious work of writing out the code. Test engineers often need to validate and tweak the AI-generated code, much as software developers often rework parts of application code produced by AI. But by writing unit tests and other software tests automatically, AI can dramatically reduce the time that QA engineers spend creating tests. ... AI tools can also assist in evaluating test results. This is important because, in the past, a test failure typically meant that a QA engineer had to sit down with developers, figure out why the test failed, and formulate a plan for fixing whichever flaw triggered the issue. AI can automate this process in many cases by evaluating test results and corresponding application code and then making recommendations about how to fix an issue. Although it's not realistic to expect AI to be capable of entirely automating all software test assessments, it can do much of the tedious work. ... At the same time, though, AI will almost certainly reduce the need for human software testers, which could lead to some job losses in this area.


From Convenience to Vulnerability: The Dual Role of APIs in Modern Services

Recently, a non-exploited vulnerability was discovered within a popular Travel Service that could have enabled attackers to take over victim accounts with a single click. Such an attack is called an "API Supply Chain Attack," in which an attacker chooses to attack a weaker link in the service's API ecosystem. While the takeover could occur within the integrated service, it likely would have provided attackers full access to the user's personally identifiable information (PII) from the main account, including all mileage and rewards data. Beyond mere data exposure, attackers could perform actions on behalf of the user, such as creating orders or modifying account details. This critical risk highlights the vulnerabilities in third-party integrations and the importance of stringent security protocols to protect users from unauthorized account access and manipulation. Vigilance, governance, and explicit control of APIs are essential for safeguarding against security gaps and vulnerabilities within API ecosystems. Organizations must prioritize investing in comprehensive API tools and software that support the entire API lifecycle. This includes identifying and cataloging all APIs in use to ensure visibility and control, continuously assessing and improving the security posture of APIs to mitigate risks, and implementing robust security measures to detect and respond to potential threats targeting APIs. 
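The cataloging step mentioned above can start as something as simple as reconciling the endpoints observed in gateway traffic against a maintained inventory, so that unknown "shadow" APIs surface for review. A minimal sketch, with hypothetical endpoint names:

```python
# Minimal API-inventory reconciliation: compare endpoints seen in gateway
# logs against a catalogued inventory, surfacing uncatalogued ("shadow")
# APIs that need a security review. Endpoint names are illustrative only.

CATALOGUED = {"/v1/bookings", "/v1/rewards", "/v1/profile"}

def find_shadow_apis(observed_endpoints: set[str]) -> set[str]:
    """Return endpoints seen in traffic but absent from the catalogue."""
    return observed_endpoints - CATALOGUED

observed = {"/v1/bookings", "/v1/profile", "/v1/partner-sso/callback"}
shadow = find_shadow_apis(observed)
# shadow now contains {"/v1/partner-sso/callback"}, the kind of third-party
# integration endpoint the excerpt's supply-chain attack abused.
```

Real API-security tooling does this continuously from live traffic, but even this set difference captures the principle: you cannot assess or protect an API you have not identified.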


Scientists Tested AI For Cognitive Decline. The Results Were a Shock.

Today, the famous large language model (LLM) is just one of several leading programs that appear convincingly human in their responses to basic queries. That uncanny resemblance may extend further than intended, with researchers from Israel now finding LLMs suffer a form of cognitive impairment similar to decline in humans, one that is more severe among earlier models. The team applied a battery of cognitive assessments to publicly available 'chatbots': versions 4 and 4o of ChatGPT, two versions of Alphabet's Gemini, and version 3.5 of Anthropic's Claude. Were the LLMs truly intelligent, the results would be concerning. In their published paper, neurologists Roy Dayan and Benjamin Uliel from Hadassah Medical Center and Gal Koplewitz, a data scientist at Tel Aviv University, describe a level of "cognitive decline that seems comparable to neurodegenerative processes in the human brain." For all of their personality, LLMs have more in common with the predictive text on your phone than the principles that generate knowledge using the squishy grey matter inside our heads. What this statistical approach to text and image generation gains in speed and personability, it pays for in gullibility, relying on algorithms that struggle to sort meaningful snippets of text from fiction and nonsense.


6 reasons so many IT orgs fail to exceed expectations today

“CIOs at large organizations know what they’ve got to hit. They know what they have to do to exceed expectations. But it’s more common that CIOs at smaller and less mature organizations have unclear objectives,” says Mark Taylor, CEO of the Society for Information Management (SIM). ... Doing all that work around expectation setting may still not be enough, as CIOs frequently find that the expectations set for them and their teams can shift suddenly. “Those moving targets happen all the time, especially when it comes to innovation,” says Peter Kreutter, WHU Otto Beisheim School of Management’s CIO Leadership Excellence Program faculty director and a member of the board of trustees for CIO Stiftung. ... “Fundamental challenges, such as legacy technology infrastructure and rigid operating cost structures, were at the core of failure rates,” the report reads. “These frequently limited the effectiveness of margin improvement initiatives and their impact on the bottom line. Unfortunately, this may only get worse, with uncertainty as a constant and the push for gen AI and data across enterprises.” ... Confusion about accountability — that is, who is really accountable for what results — is another obstacle for CIOs and IT teams as they aim high, according to Swartz.