Daily Tech Digest - April 23, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


MLOps vs. DevOps: Key Differences — and Why They Work Better Together

Arguably, the greatest difference between DevOps and MLOps is that DevOps is, by most definitions, an abstract philosophy, whereas MLOps comes closer to prescribing a distinct set of practices. Ultimately, the point of DevOps is to encourage software developers to collaborate more closely with IT operations teams, based on the idea that software delivery processes are smoother when both groups work toward shared goals. In contrast, collaboration is not the central focus of MLOps, although MLOps does imply that certain kinds of collaboration between stakeholders — such as data scientists, AI model developers, and model testers — need to be part of MLOps workflows. ... Another key difference is that DevOps centers solely on software development. MLOps is also partly about software development to the extent that model development entails writing software. However, MLOps also addresses other processes — like model design and post-deployment management — that don't overlap closely with DevOps as traditionally defined. ... Differing areas of focus lead to different skill requirements for DevOps versus MLOps. To thrive at DevOps, you must master DevOps tools and concepts like CI/CD and infrastructure-as-code (IaC).
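To make the CI/CD concept mentioned above concrete, here is a toy sketch of what a pipeline automates: run each stage in order and stop at the first failure. Real pipelines (GitHub Actions, GitLab CI, Jenkins) declare stages in configuration files; the stage names below are invented for illustration.

```python
# Toy CI pipeline runner: execute stages in order, halt on first failure.
# Stage names are illustrative, not from any specific CI product.
def run_pipeline(stages):
    for name, step in stages:
        ok = step()
        print(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False  # later stages never run
    return True

stages = [
    ("lint", lambda: True),
    ("unit-tests", lambda: True),
    ("build-image", lambda: True),
    ("deploy-staging", lambda: True),
]
print(run_pipeline(stages))  # True when every stage passes
```

The fail-fast behavior is the point: a broken unit-test stage prevents a deploy stage from ever executing.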


Transforming quality engineering with AI

AI-enabled quality engineering promises to be a game changer, driving a level of precision and efficiency that is beyond the reach of traditional testing. AI algorithms can analyse historical data to identify patterns and predict quality issues, enabling organisations to take early action; machine learning tools detect anomalies with great accuracy, ensuring nothing is missed. Self-healing test scripts update automatically, without manual intervention. Machine Learning models automate test selection, picking the most relevant ones, while reducing both manual effort and errors. In addition, AI can prioritise test cases based on criticality, thus optimising resources and improving testing outcomes. Further, it can integrate with CI/CD pipelines, providing real-time feedback on code quality, and distributing updates automatically to ensure software applications are always ready for deployment. ... AI brings immense value to quality engineering, but also presents a few challenges. To function effectively, algorithms require high-quality datasets, which may not always be available. Organisations will likely need to invest significant resources in acquiring AI talent or building skills in-house. There needs to be a clear plan for integrating AI with existing testing tools and processes. Finally, there are concerns such as protecting data privacy and confidentiality, and implementing Responsible AI.
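The test-prioritization idea above can be sketched very simply: rank test cases so the most failure-prone, most recently flaky ones run first. This is a minimal illustration, not any vendor's algorithm; the fields and scoring weights are assumptions chosen for the example.

```python
# Hypothetical sketch: prioritize tests by historical failure rate and
# recency of last failure. Weights (0.7 / 0.3) are arbitrary assumptions.
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    runs: int                     # total historical executions
    failures: int                 # historical failures
    days_since_last_failure: int

def priority(t: TestRecord) -> float:
    failure_rate = t.failures / t.runs if t.runs else 0.0
    recency = 1.0 / (1 + t.days_since_last_failure)  # recent failures weigh more
    return 0.7 * failure_rate + 0.3 * recency

def prioritize(tests: list) -> list:
    return sorted(tests, key=priority, reverse=True)

history = [
    TestRecord("checkout_flow", runs=200, failures=40, days_since_last_failure=1),
    TestRecord("login_page", runs=200, failures=2, days_since_last_failure=90),
    TestRecord("search_api", runs=150, failures=15, days_since_last_failure=7),
]
for t in prioritize(history):
    print(t.name, round(priority(t), 3))
```

A real system would learn these weights from data (for example, from which tests actually caught regressions), but the ranking-by-risk structure is the same.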


The Role of AI in Global Governance

Aurora drew parallels with transformative technologies such as electricity and the internet. "If AI reaches some communities late, it sets them far behind," he said. He pointed to Indian initiatives such as Bhashini for language inclusion, e-Sanjeevani for telehealth, Karya for employment through AI annotation and farmer.ai in Baramati, which boosted farmers' incomes by 30% to 40%. Schnorr offered a European perspective, stressing that AI's transformative impact on economies and societies demands trustworthiness. Reflecting on the EU's AI Act, he said its dual aim is fostering innovation while protecting rights. "We're reviewing the Act to ensure it doesn't hinder innovation," Schnorr said, advocating for global alignment through frameworks such as the G7's Hiroshima Code of Conduct and bilateral dialogues with India. He underscored the need for rules to make AI human-centric and accessible, particularly for small and medium enterprises, which form the backbone of both German and Indian economies. ... Singh elaborated on India's push for indigenous AI models. "Funding compute is critical, as training models is resource-intensive. We have the talent and datasets," he said, citing India's second-place ranking in GitHub AI projects per the Stanford AI Index. "Building a foundation model isn't rocket science - it's about providing the right ingredients."


Cisco ThousandEyes: resilient networks start with global insight

To tackle the challenges that arise from (common or uncommon) misconfigurations and other network problems, we need an end-to-end topology, Vaccaro reiterates. ThousandEyes (and Cisco as a whole) has recently put a lot of extra work into this. We saw a good example of this recently during Mobile World Congress. There, ThousandEyes announced Connected Devices. This is intended for service providers and extends their insight into the performance of their customers’ networks in their home environments. The goal, as Vaccaro describes it, is to help service providers see deeper so that they can catch an outage or other disruption quickly, before it impacts customers who might be streaming their favorite show or getting on a work call. ... The Digital Operational Resilience Act (DORA) will be no news to readers who are active in the financial world. You can see DORA as a kind of advanced NIS2, only directly enforced by the EU. It is a collection of best practices that many financial institutions must adhere to. Most of it is fairly obvious. In fact, we would call it basic hygiene when it comes to resilience. However, one component under DORA will have caused financial institutions some stress and will continue to do so: they must now adhere to new expectations when it comes to the services they provide and the resilience of their third-party ICT dependencies.


A Five-Step Operational Maturity Model for Benchmarking Your Team

An operational maturity model is your blueprint for building digital excellence. It gives you the power to benchmark where you are, spot the gaps holding you back and build a roadmap to where you need to be. ... Achieving operational maturity starts with knowing where you are and defining where you want to go. From there, organizations should focus on four core areas: Stop letting silos slow you down. Unify data across tools and teams to enable faster incident resolution and improve collaboration. Integrated platforms and a shared data view reduce context switching and support informed decision-making. Because in today’s fast-moving landscape, fragmented visibility isn’t just inefficient — it’s dangerous. ... Standardize what matters. Automate what repeats. Give your teams clear operational frameworks so they can focus on innovation instead of navigation. Eliminate alert noise and operational clutter that’s holding your teams back. Less noise, more impact. ... Deploy automation and AI across the incident lifecycle, from diagnostics to communication. Prioritize tools that integrate well and reduce manual tasks, freeing teams for higher-value work. ... Use data and automation to minimize disruptions and deliver seamless experiences. Communicate proactively during incidents and apply learnings to prevent future issues.


The Future is Coded: How AI is Rewriting the Rules of Decision Theaters

At the heart of this shift is the blending of generative AI with strategic foresight practices. In the past, planning for the future involved static models and expert intuition. Now, AI models (including advanced neural networks) can churn through reams of historical data and real-time information to project trends and outcomes with uncanny accuracy. Crucially, these AI-powered projections don’t operate in a vacuum – they’re designed to work with human experts. By integrating AI’s pattern recognition and speed with human intuition and domain expertise, organizations create a powerful feedback loop. ... The fusion of generative AI and foresight isn’t confined to tech companies or futurists’ labs – it’s already reshaping industries. For instance, in finance, banks and investment firms are deploying AI to synthesize market signals and predict economic trends with greater accuracy than traditional econometric models. These AI systems can simulate how different strategies might play out under various future market conditions, allowing policymakers in central banks or finance ministries to test interventions before committing to them. The result is a more data-driven, preemptive strategy – allowing decision-makers to adjust course before a forecasted risk materializes. 


More accurate coding: Researchers adapt Sequential Monte Carlo for AI-generated code

The researchers noted that AI-generated code can be powerful, but it can also often lead to code that disregards the semantic rules of programming languages. Other methods to prevent this can distort models or are too time-consuming. Their method makes the LLM adhere to programming language rules by discarding code outputs that may not work early in the process and “allocate efforts towards outputs that are most likely to be valid and accurate.” ... The researchers developed an architecture that brings SMC to code generation “under diverse syntactic and semantic constraints.” “Unlike many previous frameworks for constrained decoding, our algorithm can integrate constraints that cannot be incrementally evaluated over the entire token vocabulary, as well as constraints that can only be evaluated at irregular intervals during generation,” the researchers said in the paper. Key features of adapting SMC sampling to model generation include a proposal distribution, in which token-by-token sampling is guided by cheap constraints; importance weights, which correct for biases; and resampling, which reallocates compute effort toward promising partial generations. ... AI models have made engineers and other coders work faster and more efficiently. They have also given rise to a whole new kind of software engineer: the vibe coder.
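The three ingredients named above — a constrained proposal, importance weights, and resampling — can be illustrated with a toy Sequential Monte Carlo run. This is not the paper's system; it uses balanced parentheses as a stand-in for syntactic validity and a uniform two-token "language model," purely to show the mechanics.

```python
# Toy SMC over a two-token vocabulary. Every surviving particle is a
# syntactically valid (balanced) string of length 6.
import random
random.seed(0)

TOKENS = ["(", ")"]
MODEL_P = {"(": 0.5, ")": 0.5}   # stand-in token-level language model
LENGTH = 6

def allowed(prefix, tok):
    """Cheap incremental constraint: never close more parens than are
    open, and keep enough room to balance out by the end."""
    depth = prefix.count("(") - prefix.count(")")
    depth += 1 if tok == "(" else -1
    remaining = LENGTH - len(prefix) - 1
    return 0 <= depth <= remaining

def smc(n_particles=100):
    particles = [("", 1.0) for _ in range(n_particles)]
    for _ in range(LENGTH):
        step = []
        for prefix, w in particles:
            ok = [t for t in TOKENS if allowed(prefix, t)]
            if not ok:               # dead end: particle is dropped
                continue
            mass = sum(MODEL_P[t] for t in ok)
            tok = random.choices(ok, weights=[MODEL_P[t] for t in ok])[0]
            # Importance weight corrects for sampling from the constrained
            # proposal instead of the unconstrained model.
            step.append((prefix + tok, w * mass))
        # Resampling reallocates compute toward high-weight partials.
        total = sum(w for _, w in step)
        particles = random.choices(step, weights=[w / total for _, w in step],
                                   k=len(step))
        particles = [(p, 1.0) for p, _ in particles]
    return {p for p, _ in particles}

print(smc())  # only balanced strings such as "()()()" or "((()))" survive
```

In the real setting the proposal is the LLM's token distribution filtered by a grammar or type checker, and the expensive constraints (ones that can only be checked at irregular intervals) enter through the weights rather than the proposal.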


You Can't Be in Recovery Mode All the Time — Superna CEO

The proactive approach, he explains, shifts their position in the security lifecycle: "Now we're not responding with a very tiny blast radius and instantly recovering. We are officially left-of-the-boom; we are now ‘the incident never occurred.’" Next, Hesterberg reveals that the next wave of innovation focuses on leveraging the unique visibility his company has in terms of how critical data is accessed. “We have a keen understanding of where your critical data is and what users, what servers, and what services access that data.” From a scanning, patching, and upgrade standpoint, Hesterberg shares that large organizations often face the daunting task of addressing hundreds or even thousands of systems flagged for vulnerabilities daily. To help streamline this process, he says that his team is working on a new capability that integrates with the tools these enterprises already depend on. This upcoming feature will surface, in a prioritized way, the specific servers or services that interact with an organization's most critical data, highlighting the assets that matter most. By narrowing down the list, Hesterberg notes, teams can focus on the most potentially dangerous exposures first. Instead of trying to patch everything, he says, “If you know the 15, 20, or 50 that are most dangerous, potentially most dangerous, you're going to prioritize them.” 


When confusion becomes a weapon: How cybercriminals exploit economic turmoil

Defending against these threats doesn’t start with buying more tools. It starts with building a resilient mindset. In a crisis, security can’t be an afterthought – it must be a guiding principle. Organizations relying on informal workflows or inconsistent verification processes are unknowingly widening their attack surface. To stay ahead, protocols must be defined before uncertainty takes hold. Employees should be trained not just to spot technical anomalies, but to recognize emotional triggers embedded in legitimate-looking messages. Resilience, at its core, is about readiness. Not just to respond, but also to anticipate. Organizations that view economic disruption as a dual threat, both financial and cyber, will position themselves to lead with control rather than react in chaos. This means establishing behavioral baselines, implementing layered authentication, and adopting systems that validate, not just facilitate. As we navigate continued economic uncertainty, we are reminded once again that cybersecurity is no longer just about technology. It’s about psychology, communication, and foresight. Defending effectively means thinking tactically, staying adaptive, and treating clarity as a strategic asset.


The productivity revolution – enhancing efficiency in the workplace

In difficult economic times, when businesses are tightening the purse strings, productivity improvements may often be overlooked in favour of cost reductions. However, cutting costs is merely a short-term solution. By focusing on sustainable productivity gains, businesses will reap dividends in the long term. To achieve this, organisations must turn their focus to technology. Some technology solutions, such as cloud computing, ERP systems, project management and collaboration tools, produce significant flexibility or performance advantages compared to legacy approaches and processes. Whilst an initial expense, the long-term benefits are often multiples of the investment – cost reductions, time savings, employee motivation, to name just a few. And all of those technology categories are being enhanced with artificial intelligence – for example adding virtual agents to help us do more, quickly. ... At a time when businesses and labour markets are struggling with employee retention and availability, it has become more critical than ever for organisations to focus on effective training and wellbeing initiatives. Minimising staff turnover and building up internal skill sets is vital for businesses looking to improve their key outputs. Getting this right will enable organisations to build smarter and more effective productivity strategies.


Daily Tech Digest - April 22, 2025


Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins



Open Source and Container Security Are Fundamentally Broken

Finding a security vulnerability is only the beginning of the nightmare. The real chaos starts when teams attempt to patch it. A fix is often available, but applying it isn’t as simple as swapping out a single package. Instead, it requires upgrading the entire OS or switching to a new version of a critical dependency. With thousands of containers in production, each tied to specific configurations and application requirements, this becomes a game of Jenga, where one wrong move could bring entire services crashing down. Organizations have tried to address these problems with a variety of security platforms, from traditional vulnerability scanners to newer ASPM (Application Security Posture Management) solutions. But these tools, while helpful in tracking vulnerabilities, don’t solve the root issue: fixing them. Most scanning tools generate triage lists that quickly become overwhelming. ... The current state of open source and container security is unsustainable. With vulnerabilities emerging faster than organizations can fix them, and a growing skills gap in systems engineering fundamentals, the industry is headed toward a crisis of unmanageable security debt. The only viable path forward is to rethink how container security is handled, shifting from reactive patching to seamless, automated remediation.


The legal blind spot of shadow IT

Unauthorized applications can compromise this control, leading to non-compliance and potential fines. Similarly, industries governed by regulations like HIPAA or PCI DSS face increased risks when shadow IT circumvents established data protection protocols. Moreover, shadow IT can result in contractual breaches. Some business agreements include clauses that require adherence to specific security standards. The use of unauthorized software may violate these terms, exposing the organization to legal action. ... “A focus on asset management and monitoring is crucial for a legally defensible security program,” says Chase Doelling, Principal Strategist at JumpCloud. “Your system must be auditable—tracking who has access to what, when they accessed it, and who authorized that access in the first place.” This approach closely mirrors the structure of compliance programs. If an organization is already aligned with established compliance frameworks, it’s likely on the right path toward a security posture that can hold up under legal examination. According to Doelling, “Essentially, if your organization is compliant, you are already on track to having a security program that can stand up in a legal setting.” The foundation of that defensibility lies in visibility. With a clear view of users, assets, and permissions, organizations can more readily conduct accurate audits and respond quickly to legal inquiries.


OpenAI's most capable models hallucinate more than earlier ones

Minimizing false information in training data can lessen the chance of an untrue statement downstream. However, this technique doesn't prevent hallucinations, as many of an AI chatbot's creative choices are still not fully understood. Overall, the risk of hallucinations tends to reduce slowly with each new model release, which is what makes o3 and o4-mini's scores somewhat unexpected. Though o3 gained 12 percentage points over o1 in accuracy, the fact that the model hallucinates twice as much suggests its accuracy hasn't grown proportionally to its capabilities. ... Like other recent releases, o3 and o4-mini are reasoning models, meaning they externalize the steps they take to interpret a prompt for a user to see. Last week, independent research lab Transluce published its evaluation, which found that o3 often falsifies actions it can't take in response to a request, including claiming to run Python in a coding environment, despite the chatbot not having that ability. What's more, the model doubles down when caught. "[o3] further justifies hallucinated outputs when questioned by the user, even claiming that it uses an external MacBook Pro to perform computations and copies the outputs into ChatGPT," the report explained. Transluce found that these false claims about running code were more frequent in o-series models (o1, o3-mini, and o3) than GPT-series models (4.1 and 4o).


The leadership imperative in a technology-enabled society — Balancing IQ, EQ and AQ

EQ is the ability to understand and manage one’s emotions and those of others, which is pivotal for effective leadership. Leaders with high EQ can foster a positive workplace culture, effectively resolve conflicts and manage stress. These competencies are essential for navigating the complexities of modern organizational environments. Moreover, EQ enhances adaptability and flexibility, enabling leaders to handle uncertainties and adapt to shifting circumstances. Emotionally intelligent leaders maintain composure under pressure, make well-informed decisions with ambiguous information and guide their teams through challenging situations. ... Balancing bold innovation with operational prudence is key, fostering a culture of experimentation while maintaining stability and sustainability. Continuous learning and adaptability are essential traits, enabling leaders to stay ahead of market shifts and ensure long-term organizational relevance. ... What is of equal importance is building an organizational architecture that has resources trained on emerging technologies and skills. Investing in continuous learning and upskilling ensures IT teams can adapt to technological advancements and can take advantage of those skills for organizations to stay relevant and competitive. Leaders must also ensure they are attracting and retaining top tech talent which is critical to sustaining innovation. 


Breaking the cloud monopoly

Data control has emerged as a leading pain point for enterprises using hyperscalers. Businesses that store critical data that powers their processes, compliance efforts, and customer services on hyperscaler platforms lack easy, on-demand access to it. Many hyperscaler providers enforce limits or lack full data portability, an issue compounded by vendor lock-in or the perception of it. SaaS services have notoriously opaque data retrieval processes that make it challenging to migrate to another platform or repurpose data for new solutions. Organizations are also realizing the intrinsic value of keeping data closer to home. Real-time data processing is critical to running operations efficiently in finance, healthcare, and manufacturing. Some AI tools require rapid access to locally stored data, and being dependent on hyperscaler APIs—or integrations—creates a bottleneck. Meanwhile, compliance requirements in regions with strict privacy laws, such as the European Union, dictate stricter data sovereignty strategies. With the rise of AI, companies recognize the opportunity to leverage AI agents that work directly with local data. Unlike traditional SaaS-based AI systems that must transmit data to the cloud for processing, local-first systems can operate within organizational firewalls and maintain complete control over sensitive information. This solves both the compliance and speed issues.

Humility is a superpower. Here’s how to practice it daily

There’s a concept called epistemic humility, which refers to a trait where you seek to learn on a deep level while actively acknowledging how much you don’t know. Approach each interaction with curiosity, an open mind, and an assumption you’ll learn something new. Ask thoughtful questions about others’ experiences, perspectives, and expertise. Then listen and show your genuine interest in their responses. Let them know what you just learned. By consistently being curious, you demonstrate you’re not above learning from others. Juan, a successful entrepreneur in the healthy beverage space, approaches life and grows his business with intellectual humility. He’s a deeply curious professional who seeks feedback and perspectives from customers, employees, advisers, and investors. Juan’s ongoing openness to learning led him to adapt faster to market changes in his beverage category: He quickly identifies shifting customer preferences as well as competitive threats, then rapidly tweaks his product offerings to keep competitors at bay. He has the humility to realize he doesn’t have all the answers and embraces listening to key voices that help make his business even more successful. ... Humility isn’t about diminishing oneself. It’s about having a balanced perspective about yourself while showing genuine respect and appreciation for others.


AI took a huge leap in IQ, and now a quarter of Gen Z thinks AI is conscious

If you came of age during a pandemic when most conversations were mediated through screens, an AI companion probably doesn't feel very different from a Zoom class. So it’s maybe not a shock that, according to EduBirdie, nearly 70% of Gen Zers say “please” and “thank you” when talking to AI. Two-thirds of them use AI regularly for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, with nearly 20% sharing sensitive workplace information, such as contracts and colleagues’ personal details. Many of those surveyed rely on AI for various social situations, ranging from asking for days off to simply saying no. One in eight already talk to AI about workplace drama, and one in six have used AI as a therapist. ... But intelligence is not the same thing as consciousness. IQ scores don’t mean self-awareness. You can score a perfect 160 on a logic test and still be a toaster, if your circuits are wired that way. AI can only think in the sense that it can solve problems using programmed reasoning. You might say that I'm no different, just with meat, not circuits. But that would hurt my feelings, something you don't have to worry about with any current AI product. Maybe that will change someday, even someday soon. I doubt it, but I'm open to being proven wrong. 


How AI-driven development tools impact software observability

While AI routines have proven quite effective at taking real user monitoring traffic, generating a suite of possible tests and synthetic test data, and automating test runs on each pull request, any such system still requires humans who understand the intended business outcomes to use observability and regression testing tools to look for unintended consequences of change. “So the system just doesn’t behave well,” Puranik said. “So you fix it up with some prompt engineering. Or maybe you try a new model, to see if it improves things. But in the course of fixing that problem, you did not regress something that was already working. That’s the very nature of working with these AI systems right now — fixing one thing can often screw up something else where you didn’t know to look for it.” ... Even when developing with AI tools, added Hao Yang, head of AI at Splunk, “we’ve always relied on human gatekeepers to ensure performance. Now, with agentic AI, teams are finally automating some tasks, and taking the human out of the loop. But it’s not like engineers don’t care. They still need to monitor more, and know what an anomaly is, and the AI needs to give humans the ability to take back control. It will put security and observability back at the top of the list of critical features.”


The Future of Database Administration: Embracing AI, Cloud, and Automation

The role of the DBA has traditionally been that of storage management, backup, and performance fault resolution. Now, DBAs have no choice but to be involved in strategic initiatives, since most of their routine work has been automated. For the last five years, organizations with structured workload management and automation frameworks in place have reported about 47% less time spent on routine maintenance. ... Enterprises are using multiple cloud platforms, making it necessary for DBAs to manage data consistency, security, and performance across varied environments. Consistent deployment processes and infrastructure-as-code (IaC) tools have eliminated many configuration errors, thus improving security. Also, the rising demand for edge computing has driven the need for distributed database architectures. Such solutions allow organizations to process data near the source itself, which curtails latency during real-time decision-making in sectors such as healthcare and manufacturing. ... The future of database administration implies self-managing and AI-driven databases. These intelligent systems optimize performance, enforce security policies, and carry out upgrades autonomously, leading to a reduction in administrative burdens. Serverless databases, automatic scaling, and pay-per-query pricing models are increasingly popular, providing organizations with the chance to optimize costs while ensuring efficiency.


Introduction to Apache Kylin

Apache Kylin is an open-source OLAP engine built to bring sub-second query performance to massive datasets. Originally developed by eBay and later donated to the Apache Software Foundation, Kylin has grown into a widely adopted tool for big data analytics, particularly in environments dealing with trillions of records across complex pipelines. ... Another strength is Kylin’s unified big data warehouse architecture. It integrates natively with the Hadoop ecosystem and data lake platforms, making it a solid fit for organizations already invested in distributed storage. For visualization and business reporting, Kylin integrates seamlessly with tools like Tableau, Superset, and Power BI. It exposes query interfaces that allow us to explore data without needing to understand the underlying complexity. ... At the heart of Kylin is its data model, which is built using star or snowflake schemas to define the relationships between the underlying data tables. In this structure, we define dimensions, which are the perspectives or categories we want to analyze (like region, product, or time). Alongside them are measures, which are aggregated numerical values such as total sales or average price. ... To achieve its speed, Kylin heavily relies on pre-computation. It builds indexes (also known as CUBEs) that aggregate data ahead of time based on the model dimensions and measures.
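The pre-computation idea can be shown in miniature: aggregate a measure ahead of time for every combination of dimensions, so a filtered query later becomes a dictionary lookup instead of a scan. This is a conceptual sketch of the CUBE idea, not Kylin's actual storage format; the table and column names are invented.

```python
# Toy OLAP cube: pre-compute SUM(sales) for every subset of dimensions,
# mimicking the ahead-of-time aggregation Kylin performs at scale.
from itertools import combinations
from collections import defaultdict

rows = [
    {"region": "EU", "product": "A", "sales": 10},
    {"region": "EU", "product": "B", "sales": 20},
    {"region": "US", "product": "A", "sales": 30},
]
DIMENSIONS = ["region", "product"]

def build_cube(rows):
    cube = defaultdict(float)
    for r in rows:
        for k in range(len(DIMENSIONS) + 1):
            for dims in combinations(sorted(DIMENSIONS), k):
                key = (dims, tuple(r[d] for d in dims))
                cube[key] += r["sales"]
    return cube

cube = build_cube(rows)

def query(cube, **filters):
    """A filtered aggregate is now a constant-time lookup."""
    dims = tuple(sorted(filters))
    return cube[(dims, tuple(filters[d] for d in dims))]

print(query(cube, region="EU"))               # → 30.0
print(query(cube, region="US", product="A"))  # → 30.0
```

The trade-off is the one the article implies: pre-computing every dimension combination costs storage and build time (it grows exponentially with the number of dimensions), which is why Kylin lets you prune which combinations get materialized.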

Daily Tech Digest - April 21, 2025


Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erskine



Two ways AI hype is worsening the cybersecurity skills crisis

Another critical factor in the AI-skills shortage discussion is that attackers are also leveraging AI, putting defenders at an even greater disadvantage. Cybercriminals are using AI to generate more convincing phishing emails, automate reconnaissance, and develop malware that can evade detection. Meanwhile, security teams are struggling just to keep up. “AI exacerbates what’s already going on at an accelerated pace,” says Rona Spiegel, cyber risk advisor at GroScale and former cloud governance leader at Wells Fargo and Cisco. “In cybersecurity, the defenders have to be right all the time, while attackers only have to be right once. AI is increasing the probability of attackers getting it right more often.” ... “CISOs will have to be more tactical in their approach,” she explains. “There’s so much pressure for them to automate, automate, automate. I think it would be best if they could partner cross-functionally and focus on things like policy and urge the unification and simplification of how policies are adapted… and make sure we’re educating the entire environment, the entire workforce, not just cybersecurity.” Appayanna echoes this sentiment, arguing that when used correctly, AI can ease talent shortages rather than exacerbate them.


Data mesh vs. data fabric vs. data virtualization: There’s a difference

“Data mesh is a decentralized model for data, where domain experts like product engineers or LLM specialists control and manage their own data,” says Ahsan Farooqi, global head of data and analytics, Orion Innovation. While data mesh is tied to certain underlying technologies, it’s really a shift in thinking more than anything else. In an organization that has embraced data mesh architecture, domain-specific data is treated as a product owned by the teams relevant to those domains. ... As Matt Williams, field CTO at Cornelis Networks, puts it, “Data fabric is an architecture and set of data services that provides intelligent, real-time access to data — regardless of where it lives — across on-prem, cloud, hybrid, and edge environments. This is the architecture of choice for large data centers across multiple applications.” ... Data virtualization is the secret sauce that can make that happen. “Data virtualization is a technology layer that allows you to create a unified view of data across multiple systems and allows the user to access, query, and analyze data without physically moving or copying it,” says Williams. That means you don’t have to worry about reconciling different data stores or working with data that’s outdated. Data fabric uses data virtualization to produce that single pane of glass: It allows the user to see data as a unified set, even if that’s not the underlying physical reality.
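The "unified view without moving data" idea can be sketched in a few lines: a virtual view that federates a join across two independent systems at query time, copying nothing. The two backing stores, schema, and names below are illustrative assumptions, not any product's API.

```python
# Toy data virtualization layer: one logical record assembled on demand
# from two separate "systems of record".
import sqlite3

# System 1: an order database (here, in-memory SQLite).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer_id INTEGER, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [(1, 50.0), (2, 75.0), (1, 25.0)])

# System 2: a CRM service (here, a dict standing in for a REST API).
crm_service = {1: {"name": "Acme"}, 2: {"name": "Globex"}}

def customer_spend(customer_id):
    """Virtual view: joins both sources at query time, moving no data.
    The caller sees one unified record regardless of where data lives."""
    total = db.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = ?",
        (customer_id,)).fetchone()[0]
    return {"name": crm_service[customer_id]["name"], "total_spend": total}

print(customer_spend(1))  # {'name': 'Acme', 'total_spend': 75.0}
```

Because the view is computed at query time, it is never stale — which is exactly the property the quote above highlights: no reconciling copies, no outdated extracts.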


Biometrics adoption strategies benefit when government direction is clear

Part of the problem seems to be the collision of private and public sector interests in digital ID use cases like right-to-work checks. They would fall outside the original conception of Gov.uk as a system exclusively for public sector interaction, but the business benefit they provide is strictly one of compliance. The UK government’s Office for Digital Identities and Attributes (OfDIA), meanwhile, brought the register of digital identity and attribute services to the public beta stage earlier this month. The register lists services certified to the digital identity and attributes trust framework to perform such compliance checks, and the recent addition of Gov.uk One Login provided the spark for the current industry conflagration. Age checks for access to online pornography in France now require a “double-blind” architecture to protect user privacy. The additional complexity still leaves clear roles, however, which VerifyMy and IDxLAB have partnered to fill. Yoti has signed up a French pay site, but at least one big international player would rather fight the age assurance rules in court. Aviation and border management is one area where the enforcement of regulations has benefited from private sector innovation. Preparation for Digital Travel Credentials is underway with Amadeus pitching its “journey pass” as a way to use biometrics at each touchpoint as part of a reimagined traveller experience. 



Will AI replace software engineers? It depends on who you ask

Effective software development requires "deep collaboration with other stakeholders, including researchers, designers, and product managers, who are all giving input, often in real time," said Callery-Colyne. "Dialogues around nuanced product and user information will occur, and that context must be infused into creating better code, which is something AI simply cannot do." The area where AIs and agents have been successful so far, "is that they don't work with customers directly, but instead assist the most expensive part of any IT, the programmers and software engineers," Thurai pointed out. "While the accuracy has improved over the years, Gen AI is still not 100% accurate. But based on my conversations with many enterprise developers, the technology cuts down coding time tremendously. This is especially true for junior to mid-senior level developers." AI software agents may be most helpful "when developers are racing against time during a major incident, to roll out a fixed code quickly, and have the systems back up and running," Thurai added. "But if the code is deployed in production as is, then it adds to tech debt and could eventually make the situation worse over the years, many incidents later."


Protected NHIs: Key to Cyber Resilience

We live in a world where cyber threats are continually evolving. Cyber attackers are getting smarter and more sophisticated with their techniques, and traditional security measures no longer suffice. NHIs can be the critical game-changer that organizations have been looking for. Why is this the case? Cyber attackers today are not just targeting humans but machines as well. Remember that your IT environment includes computing resources like servers, applications, and services that all represent potential points of attack. Non-Human Identities have bridged the gap between human identities and machine identities, providing an added layer of protection. NHI security is of utmost importance, as these identities can carry overarching permissions; one single mishap with an NHI can lead to severe consequences. ... Businesses rely heavily on cloud-based services for a wide range of purposes, from storage solutions to sophisticated applications. That increasing dependency on the cloud has highlighted the pressing need for more robust and sophisticated security protocols. An NHI management strategy substantially supports this quest for fortified cloud security. By integrating with your cloud services, NHIs ensure secured access, moderated control, and streamlined data exchanges, all of which are instrumental in preventing unauthorized access and data breaches.


Job seekers using genAI to fake skills and credentials

“We’re seeing this a lot with our tech hires, and a lot of the sentence structure and overuse of buzzwords is making it super obvious,” said Joel Wolfe, president of HiredSupport, a California-based business process outsourcing (BPO) company. HiredSupport has more than 100 corporate clients globally, including companies in the eCommerce, SaaS, healthcare, and fintech sectors. Wolfe, who weighed in on the topic on LinkedIn, said he’s seeing AI-enhanced resumes “across all roles and positions, but most obvious in overembellished developer roles.” ... Employers generally say they don’t have a problem with applicants using genAI tools to write a resume, as long as it accurately represents a candidate’s qualifications and experience. ZipRecruiter, an online employment marketplace, said 67% of 800 employers surveyed reported they are open to candidates using genAI to help write their resumes, cover letters, and applications, according to its Q4 2024 Employer Report. Companies, however, face a growing threat from fake job seekers using AI to forge IDs, resumes, and interview responses. By 2028, a quarter of job candidates could be fake, according to Gartner Research. Once hired, impostors can steal data or money, or install ransomware. ... Another downside to the growing flood of AI deepfake applicants is that it hurts “real” job applicants’ chances of being hired.


How Will the Role of Chief AI Officer Evolve in 2025?

For now, the role is less about exploring the possibilities of AI and more about delivering on its immediate, concrete value. “This year, the role of the chief AI officer will shift from piloting AI initiatives to operationalizing AI at scale across the organization,” says Agarwal. And as for those potential upheavals down the road? CAIOs will no doubt have to be nimble, but Martell doesn’t see their fundamental responsibilities changing. “You still have to gather the data within your company to be able to use with that model and then you still have to evaluate whether or not that model that you built is delivering against your business goals. That has never changed,” says Martell. ... AI is at the inflection point between hype and strategic value. “I think there's going to be a ton of pressure to find the right use cases and deploy AI at scale to make sure that we're getting companies to value,” says Foss. CAIOs could feel that pressure keenly this year as boards and other executive leaders increasingly ask to see ROI on massive AI investments. “Companies who have set these roles up appropriately, and more importantly the underlying work correctly, will see the ROI measurements, and I don't think that chief AI officers [at those] organizations should feel any pressure,” says Mohindra.


Cybercriminals blend AI and social engineering to bypass detection

With improved attack strategies, bad actors have compressed the average time from initial access to full control of a domain environment to less than two hours. Similarly, while a couple of years ago it would take a few days for attackers to deploy ransomware, it’s now being detonated in under a day and even in as few as six hours. With such short timeframes between the attack and the exfiltration of data, companies are simply not prepared. Historically, attackers avoided breaching “sensitive” industries like healthcare, utilities, and critical infrastructure because of the direct impact on people’s lives.  ... Going forward, companies will have to reconcile the benefits of AI with its many risks. Implementing AI solutions expands a company’s attack surface and increases the risk of data getting leaked or stolen by attackers or third parties. Threat actors are using AI efficiently, to the point where any AI employee training you may have conducted is already outdated. AI has allowed attackers to bypass all the usual red flags you’re taught to look for, like grammatical errors, misspelled words, non-regional speech or writing, and a lack of context to your organization. Adversaries have refined their techniques, blending social engineering with AI and automation to evade detection. 


AI in Cybersecurity: Protecting Against Evolving Digital Threats

As much as AI bolsters cybersecurity defenses, it also enhances the tools available to attackers. AI-powered malware, for example, can adapt its behavior in real time to evade detection. Similarly, AI enables cybercriminals to craft phishing schemes that mimic legitimate communications with uncanny accuracy, increasing the likelihood of success. Another alarming trend is the use of AI to automate reconnaissance. Cybercriminals can scan networks and systems for vulnerabilities more efficiently than ever before, highlighting the necessity for cybersecurity teams to anticipate and counteract AI-enabled threats. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity.


AI workloads set to transform enterprise networks

As AI companies leapfrog each other in terms of capabilities, they will be able to handle even larger conversations — and agentic AI may increase the bandwidth requirements exponentially and in unpredictable ways. Any website or app could become an AI app, simply by adding an AI-powered chatbot to it, says F5’s MacVittie. When that happens, a well-defined, structured traffic pattern will suddenly start looking very different. “When you put the conversational interfaces in front, that changes how that flow actually happens,” she says. Another AI-related challenge that networking managers will need to address is that of multi-cloud complexity. ... AI brings in a whole host of potential security problems for enterprises. The technology is new and unproven, and attackers are quickly developing new techniques for attacking AI systems and their components. That’s on top of all the traditional attack vectors, says Rich Campagna, senior vice president of product management at Palo Alto Networks. “At the edge, devices and networks are often distributed, which leads to visibility blind spots,” he adds. That makes it harder to fix problems if something goes wrong. Palo Alto is developing its own AI applications, Campagna says, and has been for years. And so are its customers. 


Daily Tech Digest - April 20, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti



The Digital Twin in Automotive: The Update

According to Digital Twin researcher Julian Gebhard, the industry is moving toward integrated federated systems that allow seamless data exchange and synchronization across tools and platforms. These systems rely on semantic models and knowledge graphs to ensure interoperability and data integrity throughout the product development process. By structuring data as semantic triples (e.g., (Car) → (is colored) → (blue)), data becomes traversable, transforming raw data into knowledge. Furthermore, it becomes machine-readable, an enabler for collaboration across departments, making development more efficient and consistent. The next step is to use knowledge graphs to model product data at the value level, instead of only connecting metadata. They enable dynamic feedback loops across systems, so that changes in one area, such as simulation results or geometry updates, can automatically influence related systems. This helps maintain consistency and accelerates iteration during development. Moreover, when functional data is represented at the value level, it becomes possible to integrate disparate systems such as simulation and CAD tools into a unified, holistic viewer. In this integrated model, any change in geometry in one system automatically triggers updates in simulation parameters and physical properties, ensuring that the digital twin evolves in tandem with the actual product. 
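The triple structure Gebhard describes can be illustrated with a toy knowledge graph: facts stored as (subject, predicate, object) triples become traversable with trivial queries. The predicates and helper functions below are hypothetical examples, not part of any specific digital twin tooling.

```python
# A toy knowledge graph stored as semantic triples, mirroring the
# (Car) → (is colored) → (blue) example from the passage.

triples = {
    ("car", "is_colored", "blue"),
    ("car", "has_part", "engine"),
    ("engine", "has_property", "displacement_2.0L"),
}

def objects(subject, predicate):
    """All objects linked from `subject` via `predicate` (one traversal step)."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def neighbors(subject):
    """Every outgoing edge from `subject`; chaining hops turns raw data into knowledge."""
    return {(p, o) for s, p, o in triples if s == subject}

color = objects("car", "is_colored")   # {"blue"}
parts = objects("car", "has_part")     # {"engine"}
```

Because every fact shares one shape, tools from different departments can follow edges from `car` to `engine` to its properties without knowing each other's schemas, which is the interoperability argument in miniature.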


Wait, what is agentic AI?

AI agents are generally better than generative AI models at organizing, surfacing, and evaluating data. In theory, this makes them less prone to hallucinations. From the HBR article: “The greater cognitive reasoning of agentic AI systems means that they are less likely to suffer from the so-called hallucinations (or invented information) common to generative AI systems. Agentic AI systems also have [a] significantly greater ability to sift and differentiate information sources for quality and reliability, increasing the degree of trust in their decisions.” ... Agentic AI is a paradigm shift on the order of the emergence of LLMs or the shift to SaaS. That is to say, it’s a real thing, but we’re not yet close to understanding exactly how it will change the way we live and work. The adoption curve for agentic AI will have its challenges. There are questions wherever you look: How do you put AI agents into production? How do you test and validate code generated by autonomous agents? How do you deal with security and compliance? What are the ethical implications of relying on AI agents? As we all navigate the adoption curve, we’ll do our best to help our community answer these questions. While building agents might quickly become easier, solving for these downstream impacts is still incomplete.


Contract-as-Code: Why Finance Teams Are Taking Over Your API Contracts

Forward-thinking companies are now applying cloud native principles to contract management. Just as infrastructure became code with tools like Terraform and Ansible, we’re seeing a similar transformation with business agreements becoming “contracts-as-code.” This shift integrates critical contract information directly into the CI/CD pipeline through APIs that connect legal document management with operational workflows. Contract experts at ContractNerds highlight how API connections enable automation and improve workflow management beyond what traditional contract lifecycle management systems can achieve alone. Interestingly, this cloud native contract revolution hasn’t been led by legal departments. From our experience working with over 1,500 companies, contract ownership is rapidly shifting to finance and operations teams, with CFOs becoming the primary stakeholders in contract management systems. ... As cloud native architectures mature, treating business contracts as code becomes essential for maintaining velocity. Successful organizations will break down the artificial boundary between technical contracts (APIs) and business contracts (legal agreements), creating unified systems where all obligations and dependencies are visible, trackable, and automatable.
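To make "contracts-as-code" concrete, here is a minimal sketch of how a business obligation might be expressed as a data structure that a CI/CD pipeline can check like any other test. The `Obligation` type, its fields, and the dates are illustrative assumptions, not an actual contract lifecycle management API.

```python
# Sketch: a contractual obligation expressed as code, checkable in CI.

from dataclasses import dataclass
from datetime import date

@dataclass
class Obligation:
    party: str
    description: str
    due: date

    def is_breached(self, today: date) -> bool:
        return today > self.due

# A "contract" is just a list of machine-readable obligations.
contract = [
    Obligation("vendor", "deliver quarterly uptime report", date(2025, 6, 30)),
    Obligation("customer", "pay outstanding invoice", date(2025, 5, 15)),
]

def ci_check(today: date):
    """Return overdue obligations; a pipeline step could fail on a non-empty list."""
    return [o for o in contract if o.is_breached(today)]

overdue = ci_check(date(2025, 6, 1))  # the customer payment is overdue
```

Once obligations live in version control like this, finance and operations teams get the same visibility and automation over legal agreements that engineers already have over API contracts.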


ChatGPT can remember more about you than ever before – should you be worried?

Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI and Data Science at Matillion, a data integration platform with AI built in, sees strong use cases: “It could improve continuity for long-term projects, reduce repeated prompts, and offer a more tailored assistant experience," he says. But he’s also wary. “In practice, there are serious nuances that users, and especially companies, need to consider.” His biggest concerns here are privacy, control, and data security. ... OpenAI stresses that users can still manage memory – delete individual memories that aren't relevant anymore, turn it off entirely, or use the new “Temporary Chat” button. This now appears at the top of the chat screen for conversations that are not informed by past memories and won't be used to build new ones either. However, Wiffen says that might not be enough. “What worries me is the lack of fine-grained control and transparency,” he says. “It's often unclear what the model remembers, how long it retains information, and whether it can be truly forgotten.” ... “Even well-meaning memory features could accidentally retain sensitive personal data or internal information from projects. And from a security standpoint, persistent memory expands the attack surface.” This is likely why the new update hasn't rolled out globally yet.


How to deal with tech tariff terror

Are you confused about what President Donald J. Trump is doing with tariffs? Join the crowd; we all are. But if you’re in charge of buying PCs for your company (because Windows 10 officially reaches end-of-life status on Oct. 14) all this confusion is quickly turning into worry. Before diving into what this all means, let’s clarify one thing: you will be paying more for your technology gear — period, end of statement. ... As Ingram Micro CEO Paul Bay said in a CRN interview: “Tariffs will be passed through from the OEMs or vendors to distribution, then from distribution out to our solution providers and ultimately to the end users.” It’s already happening. Taiwan-based computing giant Acer’s CEO, Jason Chen, recently spelled it out cleanly: “10% probably will be the default price increase because of the import tax. It’s very straightforward.” When Trump came into office, we all knew there would be a ton of tariffs coming our way, especially on Chinese products such as Lenovo computers, or products largely made in China, such as those from Apple and Dell. ... But wait! It gets even murkier. Apparently that tariff “relief” is temporary and partial. US Commerce Secretary Howard Lutnick has already said that sector-specific tariffs targeting electronics are forthcoming, “probably a month or two.” Just to keep things entertaining, Trump himself has at times contradicted his own officials about the scope and duration of the exclusions.


AI Is Essential for Business Survival but It Doesn’t Guarantee Success

Li suggests companies look at how AI is integrated across the entire value chain. "To realize business value, you need to improve the whole value chain, not just certain steps." According to her, a comprehensive value chain framework includes suppliers, employees, customers, regulators, competitors, and the broader marketplace environment. For example, Li explains that when AI is applied internally to support employees, the focus is often on boosting productivity. However, using AI in customer-facing areas directly affects the products or services being delivered, which introduces higher risk. Similarly, automating processes for efficiency could influence interactions with suppliers — raising the question of whether those suppliers are prepared to adapt. ... Speaking of organizational challenges, Li discusses how positioning AI in the business and positioning AI teams in the organization is critical. Based on the organization’s level of readiness and maturity, it could adopt a centralized, distributed, or federated model, but the focus should be on people. Li also notes that an organization’s governance processes are tied to its people, activities, and operating model. She adds, “If you already have an investment, evaluate and adjust your investment expectations based on the exercise.”


AI Regulation Versus AI Innovation: A Fake Dichotomy

The problem is that institutionalization without regulation, or with poor regulation – and we see algorithms as institutions – tends to move in an extractive direction, undermining development. If development requires technological innovation, Acemoglu, Johnson, and Robinson taught us that inclusive institutions that are transparent, equitable, and effective are needed. In a nutshell, long-term prosperity requires democracy and its key values. We must, therefore, democratize the institutions that play such a key role in shaping our contexts of interaction by affecting individual behaviors with collective implications. The only way to make algorithms more democratic is by regulating them, i.e., by creating rules that establish key values, procedures, and practices that ought to be respected if we, as members of political communities, are to have any control over our future. Democratic regulation of algorithms demands forms of participation, revisability, protection of pluralism, struggle against exclusion, complex output accountability, and public debate, to mention a few elements. We must bring these institutions closer to democratic principles, as we have tried to do with other institutions. When we consider inclusive algorithmic institutions, the value of equality plays a crucial role—often overlapping with the principle of participation. 


The Shadow AI Surge: Study Finds 50% of Workers Use Unapproved AI Tools

The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek out their own AI tools to improve their personal efficiency and maximize their potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. “Using AI at work feels like second nature for many knowledge workers now. Whether it’s summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.” If the official tools aren’t easy to access or if they feel too locked down, they’ll use whatever’s available, often via an open tab in their browser. There is almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex, but they combine a reluctance to admit that their efficiency is AI-assisted rather than natural with the knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of shadow AI use, or the risks it may present.


The Rise of the AI-Generated Fake ID

The rise of AI-generated IDs poses a serious threat to digital transactions for three key reasons. First, the physical and digital processes businesses use to catch fraudulent IDs are not created equal: less sophisticated solutions may not be advanced enough to identify emerging fraud methods. Second, with AI-generated ID images readily available on the dark web for as little as $5, ownership and usage are proliferating. Third, consumer confidence is at stake: IDScan.net research from 2024 found that 78% of consumers pointed to the misuse of AI as their core fear around identity protection, and 55% believe current technology isn’t enough to protect our identities. Left unchallenged, AI fraud will damage consumer trust, purchasing behavior, and business bottom lines. Despite the furor over nefarious, super-advanced AI, generating AI IDs is fairly rudimentary. Dark web suppliers rely on PDF417 and ID image generators, using a degree of automation to match data inputs onto a contextual background. Easy-to-use tools such as Thispersondoesnotexist make it simple for anyone to cobble together a quality fake ID image and a synthetic identity. To deter would-be buyers of AI-generated fake IDs, the identity verification industry needs to demonstrate that its solutions are advanced enough to spot them, even as they increase in quality.


7 mistakes to avoid when turning a Raspberry Pi into a personal cloud

A Raspberry Pi may seem forgiving regarding power needs, but undervaluing its requirements can lead to sudden shutdowns and corrupted data. Cloud services that rely on a stable connection to read and write data need consistent energy for safe operation. A subpar power supply might struggle under peak usage, leading to instability or errors. Ensuring sufficient voltage and amperage is key to avoiding complications. A strong power supply reduces random reboots and performance bottlenecks. When the Pi experiences frequent resets, you risk damaging your data and your operating system’s integrity. In addition, any connected external drives might encounter file system corruption, harming stored data. Taking steps to confirm your power setup meets recommended standards goes a long way toward keeping your cloud server running reliably. ... A personal cloud server can create a false sense of security if you forget to establish a backup routine. Files stored on the Pi can be lost due to unexpected drive failures, accidents, or system corruption. Relying on a single storage device for everything contradicts the data redundancy principle. Setting up regular backups protects your data and helps you restore from mishaps with minimal downtime. Building a reliable backup process means deciding how often to copy your files and choosing safe locations to store them. 
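A backup routine of the kind described can be quite small. The sketch below, with illustrative paths and retention settings, snapshots a data directory into timestamped copies and prunes the oldest; a real setup would typically add incremental copying (rsync-style) and an off-device destination for redundancy.

```python
# Sketch of a retention-based backup routine for a Pi-hosted cloud:
# copy the data directory to a timestamped snapshot, prune old ones.

import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def run_backup(data_dir: Path, backup_root: Path, keep: int = 7) -> Path:
    backup_root.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    snapshot = backup_root / f"snapshot-{stamp}"
    shutil.copytree(data_dir, snapshot)  # full copy; rsync-style tools do this incrementally
    # Prune the oldest snapshots beyond the retention window.
    for old in sorted(backup_root.glob("snapshot-*"))[:-keep]:
        shutil.rmtree(old)
    return snapshot

# Demo against a throwaway directory instead of a real SD card or USB drive.
work = Path(tempfile.mkdtemp())
data = work / "data"
data.mkdir()
(data / "notes.txt").write_text("hello")
snap = run_backup(data, work / "backups", keep=2)
```

Scheduling this under cron (or a systemd timer) and pointing `backup_root` at a second drive covers the "regular backups, safe locations" advice above.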

Daily Tech Digest - April 19, 2025


Quote for the day:

"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous



AI Agents Are Coming to Work: Are Organizations Equipped?

The promise of agentic AI is already evident in organizations adopting it. Fiserv, the global fintech powerhouse, developed an agentic AI application that autonomously assigns merchant codes to businesses, reducing human intervention to under 1%. Sharbel Shaaya, director of AI operations and intelligent automation at Fiserv, said, "Tomorrow's agentic systems will handle this groundwork natively, amplifying their value." In the automotive world, Ford Motor Company is using agentic AI to amplify car design. Bryan Goodman, director of AI at Ford Motor Company, said, "Traditionally, Ford's designers sculpt physical clay models, a time-consuming process followed by lengthy engineering simulations. One computational fluid dynamics run used to take 15 hours; an AI model now predicts the outcome in 10 seconds." ... In regulated industries, compliance adds complexity. Ramnik Bajaj, chief data and analytics officer at United Services Automobile Association, sees agentic AI interpreting requests in insurance but insists on human oversight for tasks such as claims adjudication. "Regulatory constraints demand a human in the loop," Bajaj said. Trust is another hurdle - 61% of organizations cite concerns about errors, bias and data quality. "Scaling AI requires robust governance. Without trust, pilots stay pilots," Sarker said.


Code, cloud, and culture: The tech blueprint transforming Indian workplaces

The shift to hybrid cloud infrastructure is enabling Indian enterprises to modernise their legacy systems while scaling with agility. According to a report by EY India, 90% of Indian businesses believe that cloud transformation is accelerating their AI initiatives. Hybrid cloud environments—which blend on-premise infrastructure with public and private cloud—are becoming the default architecture for industries like banking, insurance, and manufacturing. HDFC Bank, for example, has adopted a hybrid cloud model to offer hyper-personalised customer services and real-time transaction capabilities. This digital core is helping financial institutions respond faster to market changes while maintaining strict regulatory compliance. ... No technological transformation is complete without human capability. The demand for AI-skilled professionals in India has grown 14x between 2016 and 2023, and the country is expected to need over one million AI professionals by 2026. Companies are responding with aggressive reskilling strategies. ... The strategic convergence of AI, SaaS, cloud, and human capital is rewriting the rules of productivity, innovation, and global competitiveness. With forward-looking investments, grassroots upskilling efforts, and a vibrant startup culture, India is poised to define the future of work, not just for itself, but for the world.


Bridging the Gap Between Legacy Infrastructure and AI-Optimized Data Centers

Failure to modernize legacy infrastructure isn’t just a technical hurdle; it’s a strategic risk. Outdated systems increase operational costs, limit scalability, and create inefficiencies that hinder innovation. However, fully replacing existing infrastructure is rarely a practical or cost-effective solution. The path forward lies in a phased approach – modernizing legacy systems incrementally while introducing AI-optimized environments capable of meeting future demands. ... AI’s relentless demand for compute power requires a more diversified and resilient approach to energy sourcing. While small modular reactors (SMRs) present a promising future solution for scalable, reliable, and low-carbon power generation, they are not yet equipped to serve critical loads in the near term. Consequently, many operators are prioritizing behind-the-meter (BTM) generation, primarily gas-focused solutions, with the potential to implement combined cycle technologies that capture and repurpose steam for additional energy efficiency. ... The future of AI-optimized data centers lies in adaptation, not replacement. Substituting legacy infrastructure on a large scale is prohibitively expensive and disruptive. Instead, a hybrid approach – layering AI-optimized environments alongside existing systems while incrementally retrofitting older infrastructure – provides a more pragmatic path forward.


Why a Culture of Observability Is Key to Technology Success

A successful observability strategy requires fostering a culture of shared responsibility for observability across all teams. By embedding observability throughout the software development life cycle, organizations create a proactive environment where issues are detected and resolved early. This will require observability buy-in across all teams within the organization. ... Teams that prioritize observability gain deeper insights into system performance and user experiences, resulting in faster incident resolution and improved service delivery. Promoting an organizational mindset that values transparency and continuous monitoring is key. ... Shifting observability left into the development process helps teams catch issues earlier, reducing the cost of fixing bugs and enhancing product quality. Developers can integrate observability into code from the outset, ensuring systems are instrumented and monitored at every stage. This is a key step toward the establishment of a culture of observability. ... A big part is making sure that all the stakeholders across the organization, whether high or low in the org chart, understand what’s going on. This means taking feedback. Leadership needs to be involved. This means communicating what you are doing, why you are doing it and what the implications are of doing or not doing it.
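Shifting observability left can start as simply as instrumenting functions at the moment they are written. The decorator below is a minimal, tool-agnostic sketch; the in-memory `metrics` list is a stand-in for whatever exporter a team actually uses.

```python
# Shift-left observability: instrument functions as they are written, so
# every call emits a timing/outcome record from day one.

import functools
import time

metrics = []  # stand-in for a real exporter (logs, StatsD, OpenTelemetry, ...)

def observed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        except Exception:
            status = "error"
            raise
        finally:
            metrics.append({
                "name": fn.__name__,
                "status": status,
                "duration_s": time.perf_counter() - start,
            })
    return wrapper

@observed
def handle_request(user_id):
    return {"user": user_id}

handle_request(42)  # the call is now visible in `metrics`
```

Because the instrumentation travels with the code, the same signals are available in development, testing, and production, which is what makes early detection possible.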


Why Agile Software Development is the Future of Engineering

Implementing iterative processes can significantly decrease time-to-market for projects. Statistics show that organizations using adaptive methodologies can increase their release frequency by up to 25%. This approach enables teams to respond promptly to market changes and customer feedback, leading to improved alignment with user expectations. Collaboration among cross-functional teams enhances productivity. In environments that prioritize teamwork, 85% of participants report higher engagement levels, which directly correlates with output quality. Structured daily check-ins allow for quick problem resolution, keeping projects on track and minimizing delays. Frequent iteration facilitates continuous testing and integration, which reduces errors early in the process. According to industry data, teams that deploy in short cycles experience up to 50% fewer defects compared to traditional methodologies. This not only expedites delivery but also enhances the overall reliability of the product. The focus on customer involvement significantly impacts product relevance. Engaging clients throughout the development process can lead to a 70% increase in user satisfaction, as adjustments are made in real time. Clients appreciate seeing their feedback implemented quickly, fostering a sense of ownership over the final product.


Why Risk Management Is Key to Sustainable Business Growth

Recent bank collapses demonstrate that a lack of effective risk management strategies can cause serious consequences for financial institutions, their customers, and the economy. A comprehensive risk management strategy is a tool to help banks protect assets, customers, and larger economic problems. ... Risk management is heavily driven by data analytics and identifying patterns in historical data. Predictive models and machine learning can forecast financial losses and detect risks and customer fraud. Additionally, banks can use predictive analytics for proactive decision-making. Data accuracy is of the utmost importance in this case because analysts use that information to make decisions about investments, customer loans, and more. Some banks rely on artificial intelligence (AI) to help detect customer defaults in more dynamic ways. For example, AI could be used in training cross-domain data to better understand customer behavior, or it could be used to make real-time decisions by incorporating real-time changes in the market data. It also improves the customer experience by offering answers through highly trained chatbots, thereby increasing customer satisfaction and reducing reputation risk. Enterprises are training generative AI (GenAI) to be virtual regulatory and policy experts to answer questions about regulations, company policies, and guidelines. 
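As a minimal illustration of the pattern-detection idea above, the sketch below flags transactions whose amounts deviate sharply from a customer's history using a z-score test. Production bank models are far richer (many features, learned thresholds); the threshold and figures here are invented for illustration.

```python
# Flag transactions that deviate sharply from a customer's history
# using a simple z-score test on the amount.

from statistics import mean, stdev

def flag_anomalies(history, new_amounts, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return [a for a in new_amounts if abs(a - mu) / sigma > threshold]

history = [52.0, 48.5, 50.0, 51.2, 49.3, 50.5]        # routine card spend
suspicious = flag_anomalies(history, [49.9, 4800.0])  # only 4800.0 is flagged
```

This also shows why the passage stresses data accuracy: a polluted `history` shifts the mean and spread, and genuinely anomalous transactions slip through.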


How U.S. tariffs could impact cloud computing

Major cloud providers, commonly referred to as hyperscalers, include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. They may initially absorb rising costs rather than risk market share by passing them on to customers. However, tariffs on hardware components such as servers and networking equipment will likely force them to reconsider their financial models, meaning enterprises can expect eventual, if not immediate, price increases. ... As hyperscalers adapt to increased costs by exploring nearshoring or regional manufacturing, these shifts may permanently change cloud pricing dynamics. Enterprises that rely on public cloud services may need to plan for contract renegotiations and higher costs in the coming years, particularly as hardware supply chains remain volatile. The financial strain imposed by tariffs also has a ripple effect, indirectly affecting cloud adoption rates. ... Adaptability and agility remain essential for both providers and enterprises. For cloud vendors, resilience in the supply chain and efficiency in hardware will be critical. Meanwhile, enterprise leaders must balance cost containment with their broader strategic goals for digital growth. By implementing thoughtful planning and proactive strategies, organizations can navigate these challenges and continue to derive value from the cloud in the years ahead.
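As a rough planning aid for the contract renegotiations mentioned above, a compound-growth sketch shows how even a modest tariff-driven price increase accumulates over a multi-year horizon. The $500k annual spend and 8% yearly increase are purely hypothetical assumptions, not figures from the article:

```python
def projected_cost(base_annual_cost, annual_increase, years):
    """Compound a yearly price increase over a planning horizon."""
    return base_annual_cost * (1 + annual_increase) ** years

# Hypothetical: $500k/year cloud spend, 8% tariff-driven annual increase.
for year in range(1, 4):
    cost = projected_cost(500_000, 0.08, year)
    print(f"Year {year}: ${cost:,.0f}")
```

Even this simple model makes the ripple effect concrete: an 8% annual increase compounds to roughly a 26% higher bill by year three, which is the kind of gap that justifies renegotiating terms early.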


CIOs must mind their own data confidence gap

A lack of good data can lead to several problems, says Aidora’s Agarwal. C-level executives — even CIOs — may demand that new products be built when the data isn’t ready, leading to IT leaders who look incompetent because they repeatedly push back on timelines, or to those who pass the burden down to their employees. “The teams may get pushed on to build the next set of things that they may not be ready to build,” he says. “This can result in failed initiatives, significantly delayed delivery, or burned-out teams.” To fix this data quality confidence gap, companies should focus on being more transparent across their org charts, Palaniappan advises. Lower-level IT leaders can help CIOs and the C-suite understand their organization’s data readiness needs by creating detailed roadmaps for IT initiatives, including a timeline to fix data problems, he says. “Take a ‘crawl, walk, run’ approach to drive this in the right direction, and put out a roadmap,” he says. “Look at your data maturity in order to execute your roadmap, and then slowly improve upon it.” Companies need strong data foundations, including data strategies focused on business cases, data accessibility, and data security, adds Softserve’s Myronov. Organizations should also employ skeptics to point out potential data problems during AI and other data-driven projects, he suggests.


AI has grown beyond human knowledge, says Google's DeepMind unit

Not only is human judgment an impediment, but the short, clipped nature of prompt interactions never allows the AI model to advance beyond question and answer. "In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and the agent responds," the researchers write. "The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question." There's no memory, there's no continuity between snippets of interaction in prompting. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton. However, in their proposed era of experience, "Agents will inhabit streams of experience, rather than short snippets of interaction." Silver and Sutton draw an analogy between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task. ... The researchers suggest that the arrival of "thinking" or "reasoning" AI models, such as Gemini, DeepSeek's R1, and OpenAI's o1, may be surpassed by experience agents. The problem with reasoning agents is that they "imitate" human language when they produce verbose output about steps to an answer, and human thought can be limited by its embedded assumptions.
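The episodic-versus-stream distinction can be caricatured in a few lines of code. This is a toy illustration of the structural difference only, not DeepMind's proposal; the class and method names are invented, and the counter is a stand-in for genuine learning from accumulated experience:

```python
class EpisodicAgent:
    """Answers each prompt in isolation; nothing carries over between turns."""
    def respond(self, prompt):
        return f"answer({prompt})"

class StreamAgent:
    """Keeps a persistent experience stream that informs every response."""
    def __init__(self):
        self.stream = []  # accumulated (prompt, response) pairs

    def respond(self, prompt):
        # Stand-in for adaptation: a real agent would learn from the
        # stream and pursue long-range goals, not merely count entries.
        informed_by = len(self.stream)
        response = f"answer({prompt}) [informed by {informed_by} past steps]"
        self.stream.append((prompt, response))
        return response

agent = StreamAgent()
print(agent.respond("q1"))  # informed by 0 past steps
print(agent.respond("q2"))  # informed by 1 past steps
```

The episodic agent returns the same answer no matter what came before; the stream agent's state grows with every interaction, which is the continuity Silver and Sutton argue current prompt-response systems lack.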


Understanding API Security: Insights from GoDaddy’s FTC Settlement

The FTC’s action against GoDaddy stemmed from the company’s inadequate security practices, which led to multiple data breaches from 2019 to 2022. These breaches exposed sensitive customer data, including usernames, passwords, and employee credentials. ... GoDaddy did not implement multi-factor authentication (MFA) and encryption, leaving customer data vulnerable. Without MFA and robust checks against credential stuffing, attackers could easily exploit stolen or weak credentials to access user accounts. Even with authentication, attackers can abuse authenticated sessions if the underlying API authorization is flawed. ... The absence of rate-limiting, logging, and anomaly detection allowed unauthorized access to 1.2 million customer records. More critically, this lack of deep inspection meant an inability to baseline normal API behavior and detect subtle reconnaissance or the exploitation of unique business logic flaws – attacks that often bypass traditional signature-based tools. ... Inadequate Access Controls: The exposure of admin credentials and encryption keys enabled attackers to compromise websites. Strong access controls are essential to restrict access to sensitive information to authorized personnel only. This highlights the risk not just of credential theft, but of authorization flaws within APIs themselves, where authenticated users gain access to data they shouldn’t.
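One of the missing controls cited above, rate limiting, is straightforward to sketch. This is a generic sliding-window limiter to illustrate the concept, not GoDaddy's (or any vendor's) implementation; the limit, window, and client key are illustrative:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per client key."""
    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject (e.g. respond HTTP 429)
        q.append(now)
        return True

# 5 login attempts per 60 seconds per source IP; the 6th is rejected.
limiter = SlidingWindowLimiter(limit=5, window=60)
results = [limiter.allow("203.0.113.7", now=i) for i in range(6)]
print(results)  # [True, True, True, True, True, False]
```

Keyed on source IP or account, a limiter like this blunts credential stuffing by capping guess rates; paired with logging of rejected attempts, it also provides the anomaly signal that was absent in the breaches described.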