Daily Tech Digest - April 20, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti



The Digital Twin in Automotive: The Update

According to Digital Twin researcher Julian Gebhard, the industry is moving toward integrated federated systems that allow seamless data exchange and synchronization across tools and platforms. These systems rely on semantic models and knowledge graphs to ensure interoperability and data integrity throughout the product development process. By structuring data as semantic triples (e.g. (Car) → (is colored) → (blue)), data becomes traversable, transforming raw data into knowledge. It also becomes machine-readable, an enabler for collaboration across departments that makes development more efficient and consistent. The next step is to use knowledge graphs to model product data at the value level, instead of only connecting metadata. They enable dynamic feedback loops across systems, so that changes in one area, such as simulation results or geometry updates, can automatically influence related systems. This helps maintain consistency and accelerates iteration during development. Moreover, when functional data is represented at the value level, it becomes possible to integrate disparate systems such as simulation and CAD tools into a unified, holistic viewer. In this integrated model, any change in geometry in one system automatically triggers updates in simulation parameters and physical properties, ensuring that the digital twin evolves in tandem with the actual product.
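To make the triple idea concrete, here is a minimal sketch (in Python, not tied to any particular digital twin platform) of product data stored as (subject, predicate, object) triples, with a traversal that shows how a geometry change could flag downstream simulation artifacts. The entities and predicates are invented for illustration.

```python
# Minimal illustration of product data as semantic triples and simple traversal.
# Triples follow the (subject, predicate, object) pattern described above; the
# specific entities and predicates are made up for the example.

from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # subject -> list of (predicate, object)
        self.edges = defaultdict(list)

    def add_triple(self, subject, predicate, obj):
        self.edges[subject].append((predicate, obj))

    def query(self, subject, predicate):
        """Return all objects linked to `subject` via `predicate`."""
        return [o for p, o in self.edges[subject] if p == predicate]

    def propagate(self, subject, visited=None):
        """Walk outgoing links so a change to `subject` can flag dependent artifacts."""
        visited = visited or set()
        for _, obj in self.edges[subject]:
            if obj not in visited:
                visited.add(obj)
                self.propagate(obj, visited)
        return visited

kg = KnowledgeGraph()
kg.add_triple("Car", "is colored", "blue")
kg.add_triple("Car", "has part", "Door geometry v42")
kg.add_triple("Door geometry v42", "feeds", "Crash simulation run 7")

print(kg.query("Car", "is colored"))      # ['blue']
print(kg.propagate("Door geometry v42"))  # downstream artifacts to re-check
```

In a real federated setup this role is played by RDF stores and standardized ontologies rather than an in-memory dictionary, but the traversal logic behind the feedback loops is the same idea.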


Wait, what is agentic AI?

AI agents are generally better than generative AI models at organizing, surfacing, and evaluating data. In theory, this makes them less prone to hallucinations. From the HBR article: “The greater cognitive reasoning of agentic AI systems means that they are less likely to suffer from the so-called hallucinations (or invented information) common to generative AI systems. Agentic AI systems also have [a] significantly greater ability to sift and differentiate information sources for quality and reliability, increasing the degree of trust in their decisions.” ... Agentic AI is a paradigm shift on the order of the emergence of LLMs or the shift to SaaS. That is to say, it’s a real thing, but we’re not yet close to understanding exactly how it will change the way we live and work. The adoption curve for agentic AI will have its challenges. There are questions wherever you look: How do you put AI agents into production? How do you test and validate code generated by autonomous agents? How do you deal with security and compliance? What are the ethical implications of relying on AI agents? As we all navigate the adoption curve, we’ll do our best to help our community answer these questions. While building agents might quickly become easier, solving for these downstream impacts remains a work in progress.


Contract-as-Code: Why Finance Teams Are Taking Over Your API Contracts

Forward-thinking companies are now applying cloud native principles to contract management. Just as infrastructure became code with tools like Terraform and Ansible, we’re seeing a similar transformation with business agreements becoming “contracts-as-code.” This shift integrates critical contract information directly into the CI/CD pipeline through APIs that connect legal document management with operational workflows. Contract experts at ContractNerds highlight how API connections enable automation and improve workflow management beyond what traditional contract lifecycle management systems can achieve alone. Interestingly, this cloud native contract revolution hasn’t been led by legal departments. From our experience working with over 1,500 companies, contract ownership is rapidly shifting to finance and operations teams, with CFOs becoming the primary stakeholders in contract management systems. ... As cloud native architectures mature, treating business contracts as code becomes essential for maintaining velocity. Successful organizations will break down the artificial boundary between technical contracts (APIs) and business contracts (legal agreements), creating unified systems where all obligations and dependencies are visible, trackable, and automatable.


ChatGPT can remember more about you than ever before – should you be worried?

Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI and Data Science at Matillion, a data integration platform with AI built in, sees strong use cases: “It could improve continuity for long-term projects, reduce repeated prompts, and offer a more tailored assistant experience," he says. But he’s also wary. “In practice, there are serious nuances that users, and especially companies, need to consider.” His biggest concerns here are privacy, control, and data security. ... OpenAI stresses that users can still manage memory – delete individual memories that aren't relevant anymore, turn it off entirely, or use the new “Temporary Chat” button. This now appears at the top of the chat screen for conversations that are not informed by past memories and won't be used to build new ones either. However, Wiffen says that might not be enough. “What worries me is the lack of fine-grained control and transparency,” he says. “It's often unclear what the model remembers, how long it retains information, and whether it can be truly forgotten.” ... “Even well-meaning memory features could accidentally retain sensitive personal data or internal information from projects. And from a security standpoint, persistent memory expands the attack surface.” This is likely why the new update hasn't rolled out globally yet.


How to deal with tech tariff terror

Are you confused about what President Donald J. Trump is doing with tariffs? Join the crowd; we all are. But if you’re in charge of buying PCs for your company (because Windows 10 officially reaches end-of-life status on Oct. 14) all this confusion is quickly turning into worry. Before diving into what this all means, let’s clarify one thing: you will be paying more for your technology gear — period, end of statement. ... As Ingram Micro CEO Paul Bay said in a CRN interview: “Tariffs will be passed through from the OEMs or vendors to distribution, then from distribution out to our solution providers and ultimately to the end users.” It’s already happening. Taiwan-based computing giant Acer’s CEO, Jason Chen, recently spelled it out cleanly: “10% probably will be the default price increase because of the import tax. It’s very straightforward.” When Trump came into office, we all knew there would be a ton of tariffs coming our way, especially on Chinese products such as Lenovo computers, or products largely made in China, such as those from Apple and Dell. ... But wait! It gets even murkier. Apparently that tariff “relief” is temporary and partial. US Commerce Secretary Howard Lutnick has already said that sector-specific tariffs targeting electronics are forthcoming, “probably a month or two.” Just to keep things entertaining, Trump himself has at times contradicted his own officials about the scope and duration of the exclusions.


AI Is Essential for Business Survival but It Doesn’t Guarantee Success

Li suggests companies look at how AI is integrated across the entire value chain. "To realize business value, you need to improve the whole value chain, not just certain steps." According to her, a comprehensive value chain framework includes suppliers, employees, customers, regulators, competitors, and the broader marketplace environment. For example, Li explains that when AI is applied internally to support employees, the focus is often on boosting productivity. However, using AI in customer-facing areas directly affects the products or services being delivered, which introduces higher risk. Similarly, automating processes for efficiency could influence interactions with suppliers — raising the question of whether those suppliers are prepared to adapt. ... Speaking of organizational challenges, Li discusses how positioning AI in business and positioning AI teams in organizations is critical. Based on the organization’s level of readiness and maturity, it could have a centralized, distributed, or federated model, but the focus should be on people. Li also notes that the organization’s governance processes are tied to its people, activities, and operating model. She adds, “If you already have an investment, evaluate and adjust your investment expectations based on the exercise.”


AI Regulation Versus AI Innovation: A Fake Dichotomy

The problem is that institutionalization with poor regulation, or none at all – and we see algorithms as institutions – tends to move in an extractive direction, undermining development. Acemoglu, Johnson, and Robinson taught us that if development requires technological innovation, then inclusive institutions that are transparent, equitable, and effective are needed. In a nutshell, long-term prosperity requires democracy and its key values. We must, therefore, democratize the institutions that play such a key role in shaping our contexts of interaction by affecting individual behaviors with collective implications. The only way to make algorithms more democratic is by regulating them, i.e., by creating rules that establish key values, procedures, and practices that ought to be respected if we, as members of political communities, are to have any control over our future. Democratic regulation of algorithms demands forms of participation, revisability, protection of pluralism, struggle against exclusion, complex output accountability, and public debate, to mention a few elements. We must bring these institutions closer to democratic principles, as we have tried to do with other institutions. When we consider inclusive algorithmic institutions, the value of equality plays a crucial role—often overlapping with the principle of participation.


The Shadow AI Surge: Study Finds 50% of Workers Use Unapproved AI Tools

The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek their own AI tools to improve their personal efficiency and maximize the potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. “Using AI at work feels like second nature for many knowledge workers now. Whether it’s summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.” If the official tools aren’t easy to access or feel too locked down, employees will use whatever’s available, often via an open tab in their browser. There is almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex but combine elements of a reluctance to admit that their efficiency is AI assisted rather than natural, and knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of shadow AI use, or of the risks it may present.


The Rise of the AI-Generated Fake ID

The rise of AI-generated IDs poses a serious threat to digital transactions for three key reasons. The physical and digital processes businesses use to catch fraudulent IDs are not created equal. Less sophisticated solutions may not be advanced enough to identify emerging fraud methods. With AI-generated ID images readily available on the dark web for as little as $5, ownership and usage are proliferating. IDScan.net research from 2024 demonstrated that 78% of consumers pointed to the misuse of AI as their core fear around identity protection. Equally, 55% believe current technology isn’t enough to protect our identities. Left unchallenged, AI fraud will damage consumer trust, purchasing behavior, and business bottom lines. Hiding behind the furor of nefarious, super-advanced AI, generating AI IDs is fairly rudimentary. Darkweb suppliers rely on PDF417 and ID image generators, using a degree of automation to match data inputs onto a contextual background. Easy-to-use tools such as Thispersondoesnotexist make it simple for anyone to cobble together a quality fake ID image and a synthetic identity. To deter potential AI-generated fake ID buyers from purchasing, the identity verification industry needs to demonstrate that our solutions are advanced enough to spot them, even as they increase in quality.


7 mistakes to avoid when turning a Raspberry Pi into a personal cloud

A Raspberry Pi may seem forgiving regarding power needs, but underestimating its requirements can lead to sudden shutdowns and corrupted data. Cloud services that rely on a stable connection to read and write data need consistent energy for safe operation. A subpar power supply might struggle under peak usage, leading to instability or errors. Ensuring sufficient voltage and amperage is key to avoiding complications. A strong power supply reduces random reboots and performance bottlenecks. When the Pi experiences frequent resets, you risk damaging your data and your operating system’s integrity. In addition, any connected external drives might encounter file system corruption, harming stored data. Taking steps to confirm your power setup meets recommended standards goes a long way toward keeping your cloud server running reliably. ... A personal cloud server can create a false sense of security if you forget to establish a backup routine. Files stored on the Pi can be lost due to unexpected drive failures, accidents, or system corruption. Relying on a single storage device for everything contradicts the data redundancy principle. Setting up regular backups protects your data and helps you restore from mishaps with minimal downtime. Building a reliable backup process means deciding how often to copy your files and choosing safe locations to store them.
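As a rough illustration of such a routine, the sketch below copies the served data to a second disk each night and prunes old snapshots. It assumes rsync is installed, the paths and retention count are placeholders, and it is meant to be scheduled via cron rather than used as-is.

```python
# Sketch of a nightly backup job for a Pi-based personal cloud (run via cron).
# Paths, mount points, and retention count below are placeholders, not recommendations.

import datetime
import pathlib
import subprocess

SOURCE = "/srv/cloud-data/"              # data served by the personal cloud
DEST_ROOT = pathlib.Path("/mnt/backup")  # second disk or NAS mount
KEEP = 7                                 # how many daily snapshots to retain

def run_backup():
    stamp = datetime.date.today().isoformat()
    dest = DEST_ROOT / stamp
    dest.mkdir(parents=True, exist_ok=True)
    # rsync -a preserves permissions/timestamps; --delete mirrors removals
    subprocess.run(["rsync", "-a", "--delete", SOURCE, str(dest)], check=True)

def prune_old_snapshots():
    snapshots = sorted(p for p in DEST_ROOT.iterdir() if p.is_dir())
    for old in snapshots[:-KEEP]:
        subprocess.run(["rm", "-rf", str(old)], check=True)

if __name__ == "__main__":
    run_backup()
    prune_old_snapshots()
```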

Daily Tech Digest - April 19, 2025


Quote for the day:

"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous



AI Agents Are Coming to Work: Are Organizations Equipped?

The promise of agentic AI is already evident in organizations adopting it. Fiserv, the global fintech powerhouse, developed an agentic AI application that autonomously assigns merchant codes to businesses, reducing human intervention to under 1%. Sharbel Shaaya, director of AI operations and intelligent automation at Fiserv, said, "Tomorrow's agentic systems will handle this groundwork natively, amplifying their value." In the automotive world, Ford Motor Company is using agentic AI to amplify car design. Bryan Goodman, director of AI at Ford Motor Company, said, "Traditionally, Ford's designers sculpt physical clay models, a time-consuming process followed by lengthy engineering simulations. One computational fluid dynamics run used to take 15 hours; an AI model now predicts the outcome in 10 seconds." ... In regulated industries, compliance adds complexity. Ramnik Bajaj, chief data and analytics officer at United Services Automobile Association, sees agentic AI interpreting requests in insurance but insists on human oversight for tasks such as claims adjudication. "Regulatory constraints demand a human in the loop," Bajaj said. Trust is another hurdle - 61% of organizations cite concerns about errors, bias and data quality. "Scaling AI requires robust governance. Without trust, pilots stay pilots," Sarker said.


Code, cloud, and culture: The tech blueprint transforming Indian workplaces

The shift to hybrid cloud infrastructure is enabling Indian enterprises to modernise their legacy systems while scaling with agility. According to a report by EY India, 90% of Indian businesses believe that cloud transformation is accelerating their AI initiatives. Hybrid cloud environments—which blend on-premise infrastructure with public and private cloud—are becoming the default architecture for industries like banking, insurance, and manufacturing. HDFC Bank, for example, has adopted a hybrid cloud model to offer hyper-personalised customer services and real-time transaction capabilities. This digital core is helping financial institutions respond faster to market changes while maintaining strict regulatory compliance. ... No technological transformation is complete without human capability. The demand for AI-skilled professionals in India has grown 14x between 2016 and 2023, and the country is expected to need over one million AI professionals by 2026. Companies are responding with aggressive reskilling strategies. ... The strategic convergence of AI, SaaS, cloud, and human capital is rewriting the rules of productivity, innovation, and global competitiveness. With forward-looking investments, grassroots upskilling efforts, and a vibrant startup culture, India is poised to define the future of work, not just for itself, but for the world.


Bridging the Gap Between Legacy Infrastructure and AI-Optimized Data Centers

Failure to modernize legacy infrastructure isn’t just a technical hurdle; it’s a strategic risk. Outdated systems increase operational costs, limit scalability, and create inefficiencies that hinder innovation. However, fully replacing existing infrastructure is rarely a practical or cost-effective solution. The path forward lies in a phased approach – modernizing legacy systems incrementally while introducing AI-optimized environments capable of meeting future demands. ... AI’s relentless demand for compute power requires a more diversified and resilient approach to energy sourcing. While small modular reactors (SMRs) present a promising future solution for scalable, reliable, and low-carbon power generation, they are not yet equipped to serve critical loads in the near term. Consequently, many operators are prioritizing behind-the-meter (BTM) generation, primarily gas-focused solutions, with the potential to implement combined cycle technologies that capture and repurpose steam for additional energy efficiency. ... The future of AI-optimized data centers lies in adaptation, not replacement. Substituting legacy infrastructure on a large scale is prohibitively expensive and disruptive. Instead, a hybrid approach – layering AI-optimized environments alongside existing systems while incrementally retrofitting older infrastructure – provides a more pragmatic path forward.


Why a Culture of Observability Is Key to Technology Success

A successful observability strategy requires fostering a culture of shared responsibility for observability across all teams. By embedding observability throughout the software development life cycle, organizations create a proactive environment where issues are detected and resolved early. This will require observability buy-in across all teams within the organization. ... Teams that prioritize observability gain deeper insights into system performance and user experiences, resulting in faster incident resolution and improved service delivery. Promoting an organizational mindset that values transparency and continuous monitoring is key. ... Shifting observability left into the development process helps teams catch issues earlier, reducing the cost of fixing bugs and enhancing product quality. Developers can integrate observability into code from the outset, ensuring systems are instrumented and monitored at every stage. This is a key step toward the establishment of a culture of observability. ... A big part is making sure that all the stakeholders across the organization, whether high or low in the org chart, understand what’s going on. This means taking feedback. Leadership needs to be involved. This means communicating what you are doing, why you are doing it and what the implications are of doing or not doing it.
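As one possible illustration of instrumenting code from the outset, the sketch below uses the OpenTelemetry Python SDK to wrap a function in a span and attach business attributes. The span and attribute names are made up for the example, and a real deployment would export to a collector rather than the console.

```python
# Sketch of "shifting observability left": instrumenting application code with
# OpenTelemetry from the outset. Requires the opentelemetry-sdk package; the
# span and attribute names are illustrative only.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def handle_order(order_id: str, items: int) -> None:
    # Each request produces a span that carries business context for later analysis.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.item_count", items)
        # ... business logic here ...

handle_order("A-1001", 3)
```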


Why Agile Software Development is the Future of Engineering

Implementing iterative processes can significantly decrease time-to-market for projects. Statistics show that organizations using adaptive methodologies can increase their release frequency by up to 25%. This approach enables teams to respond promptly to market changes and customer feedback, leading to improved alignment with user expectations. Collaboration among cross-functional teams enhances productivity. In environments that prioritize teamwork, 85% of participants report higher engagement levels, which directly correlates with output quality. Structured daily check-ins allow for quick problem resolution, keeping projects on track and minimizing delays. Frequent iteration facilitates continuous testing and integration, which reduces errors early in the process. According to industry data, teams that deploy in short cycles experience up to 50% fewer defects compared to traditional methodologies. This not only expedites delivery but also enhances the overall reliability of the product. The focus on customer involvement significantly impacts product relevance. Engaging clients throughout the development process can lead to a 70% increase in user satisfaction, as adjustments are made in real time. Clients appreciate seeing their feedback implemented quickly, fostering a sense of ownership over the final product.


Why Risk Management Is Key to Sustainable Business Growth

Recent bank collapses demonstrate that a lack of effective risk management strategies can have serious consequences for financial institutions, their customers, and the economy. A comprehensive risk management strategy helps banks protect their assets and customers, and guard against larger economic problems. ... Risk management is heavily driven by data analytics and identifying patterns in historical data. Predictive models and machine learning can forecast financial losses and detect risks and customer fraud. Additionally, banks can use predictive analytics for proactive decision-making. Data accuracy is of the utmost importance in this case because analysts use that information to make decisions about investments, customer loans, and more. Some banks rely on artificial intelligence (AI) to help detect customer defaults in more dynamic ways. For example, AI could be trained on cross-domain data to better understand customer behavior, or it could be used to make real-time decisions by incorporating real-time changes in the market data. It also improves the customer experience by offering answers through highly trained chatbots, thereby increasing customer satisfaction and reducing reputation risk. Enterprises are training generative AI (GenAI) to be virtual regulatory and policy experts to answer questions about regulations, company policies, and guidelines.


How U.S. tariffs could impact cloud computing

Major cloud providers are commonly referred to as hyperscalers, and include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. They initially may absorb rising cloud costs to avoid risking market share by passing them on to customers. However, tariffs on hardware components such as servers and networking equipment will likely force them to reconsider their financial models, which means enterprises can expect eventual, if not immediate, price increases. ... As hyperscalers adapt to increased costs by exploring nearshoring or regional manufacturing, these shifts may permanently change cloud pricing dynamics. Enterprises that rely on public cloud services may need to plan for contract renegotiations and higher costs in the coming years, particularly as hardware supply chains remain volatile. The financial strain imposed by tariffs also has a ripple effect, indirectly affecting cloud adoption rates. ... Adaptability and agility remain essential for both providers and enterprises. For cloud vendors, resilience in the supply chain and efficiency in hardware will be critical. Meanwhile, enterprise leaders must balance cost containment with their broader strategic goals for digital growth. By implementing thoughtful planning and proactive strategies, organizations can navigate these challenges and continue to derive value from the cloud in the years ahead.


CIOs must mind their own data confidence gap

A lack of good data can lead to several problems, says Aidora’s Agarwal. C-level executives — even CIOs — may demand that new products be built when the data isn’t ready, leading to IT leaders who look incompetent because they repeatedly push back on timelines, or to those who pass the burden down to their employees. “The teams may get pushed on to build the next set of things that they may not be ready to build,” he says. “This can result in failed initiatives, significantly delayed delivery, or burned-out teams.” To fix this data quality confidence gap, companies should focus on being more transparent across their org charts, Palaniappan advises. Lower-level IT leaders can help CIOs and the C-suite understand their organization’s data readiness needs by creating detailed roadmaps for IT initiatives, including a timeline to fix data problems, he says. “Take a ‘crawl, walk, run’ approach to drive this in the right direction, and put out a roadmap,” he says. “Look at your data maturity in order to execute your roadmap, and then slowly improve upon it.” Companies need strong data foundations, including data strategies focused on business cases, data accessibility, and data security, adds Softserve’s Myronov. Organizations should also employ skeptics to point out potential data problems during AI and other data-driven projects, he suggests.


AI has grown beyond human knowledge, says Google's DeepMind unit

Not only is human judgment an impediment, but the short, clipped nature of prompt interactions never allows the AI model to advance beyond question and answer. "In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and the agent responds," the researchers write. "The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question." There's no memory, there's no continuity between snippets of interaction in prompting. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton. However, in their proposed Age of Experience, "Agents will inhabit streams of experience, rather than short snippets of interaction." Silver and Sutton draw an analogy between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task. ... The researchers suggest that the arrival of "thinking" or "reasoning" AI models, such as Gemini, DeepSeek's R1, and OpenAI's o1, may be surpassed by experience agents. The problem with reasoning agents is that they "imitate" human language when they produce verbose output about steps to an answer, and human thought can be limited by its embedded assumptions.


Understanding API Security: Insights from GoDaddy’s FTC Settlement

The FTC’s action against GoDaddy stemmed from the company’s inadequate security practices, which led to multiple data breaches from 2019 to 2022. These breaches exposed sensitive customer data, including usernames, passwords, and employee credentials. ... GoDaddy did not implement multi-factor authentication (MFA) and encryption, leaving customer data vulnerable. Without MFA and robust checks against credential stuffing, attackers could easily exploit stolen or weak credentials to access user accounts. Even with authentication, attackers can abuse authenticated sessions if the underlying API authorization is flawed. ... The absence of rate-limiting, logging, and anomaly detection allowed unauthorized access to 1.2 million customer records. More critically, this lack of deep inspection meant an inability to baseline normal API behavior and detect subtle reconnaissance or the exploitation of unique business logic flaws – attacks that often bypass traditional signature-based tools. ... Inadequate Access Controls: The exposure of admin credentials and encryption keys enabled attackers to compromise websites. Strong access controls are essential to restrict access to sensitive information to authorized personnel only. This highlights the risk not just of credential theft, but of authorization flaws within APIs themselves, where authenticated users gain access to data they shouldn’t.
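The rate-limiting gap called out above can be illustrated with a minimal token-bucket limiter keyed per API credential. The limits, key handling, and logging below are placeholder choices for the sketch, not a description of GoDaddy's systems or any specific product.

```python
# Minimal token-bucket rate limiter per API key, sketching the kind of control
# whose absence is described above. Limits and key handling are illustrative.

import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def check_request(api_key: str) -> bool:
    bucket = buckets.setdefault(api_key, TokenBucket(rate_per_sec=5, burst=10))
    allowed = bucket.allow()
    if not allowed:
        # Log and alert: repeated throttling of one key is an anomaly signal too.
        print(f"rate limit exceeded for key {api_key!r}")
    return allowed
```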

Daily Tech Digest - April 18, 2025


Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis



How to Use Passive DNS To Trace Hackers Command And Control Infrastructure

This technology works through a network of sensors that monitor DNS query-response pairs, forwarding this information to central collection points for analysis without disrupting normal network operations. The resulting historical databases contain billions of unique records that security analysts can query to understand how domain names have resolved over time. ... When investigating potential threats, analysts can review months or even years of DNS resolution data without alerting adversaries to their investigation—a critical advantage when dealing with sophisticated threat actors. ... The true power of passive DNS in C2 investigation comes through various pivoting techniques that allow analysts to expand from a single indicator to map entire attack infrastructures. These techniques leverage the interconnected nature of DNS to reveal relationships between seemingly disparate domains and IP addresses. IP-based pivoting represents one of the most effective approaches. Starting with a known malicious IP address, analysts can query passive DNS to identify all domains that have historically resolved to that address. This technique often reveals additional malicious domains that share infrastructure but might otherwise appear unrelated.
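A sketch of what IP-based pivoting looks like in practice is shown below. The passive DNS endpoint, parameters, and response fields are hypothetical stand-ins, since every provider exposes its own schema and authentication; the example IP is from a documentation range, and the requests library is assumed to be installed.

```python
# Sketch of IP-based pivoting against a passive DNS service. The endpoint,
# parameters, and response shape below are hypothetical; real passive DNS
# providers each have their own schemas and auth.

import requests

PDNS_URL = "https://pdns.example.com/api/v1/resolutions"  # placeholder endpoint
API_KEY = "REDACTED"

def domains_for_ip(ip: str) -> set[str]:
    """Return every domain the passive DNS archive has seen resolving to `ip`."""
    resp = requests.get(
        PDNS_URL,
        params={"ip": ip},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return {record["domain"] for record in resp.json().get("records", [])}

def pivot(seed_ip: str) -> set[str]:
    # Start from one known-bad C2 address and expand to related domains,
    # which can then be pivoted again on their own historical resolutions.
    related = domains_for_ip(seed_ip)
    print(f"{seed_ip} -> {len(related)} historically associated domains")
    return related

# pivot("203.0.113.7")   # documentation-range IP used purely as an example
```

Each returned domain can then be pivoted again on its historical IPs, gradually mapping the wider C2 infrastructure without ever touching the adversary's servers.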


Why digital identity is the cornerstone of trust in modern business

The foundation of digital trust is identity. It is no longer sufficient to treat identity management as a backend IT concern. Enterprises must now embed identity solutions into every digital touchpoint, ensuring that user interactions – whether by customers, employees, or partners – are both frictionless and secure. Modern enterprises must shift from fragmented, legacy systems to a unified identity platform. This evolution allows organisations to scale securely, eliminate redundancies and deliver the streamlined experiences users now expect. ... Digital identity is also a driver of customer experience. In today’s hyper-competitive digital landscape, the sign-up process can make or break a brand relationship. Clunky login screens or repeated verification prompts are quick ways to lose a customer.


Is your business ready for the IDP revolution?

AI-powered document processing offers significant advantages. Using advanced ML, IDP systems accurately interpret even complex and low-quality documents, including those with intricate tables and varying formats. This reduces manual work and the risk of human error. ... IDP also significantly improves data quality and accuracy by eliminating manual data entry, ensuring critical information is captured correctly and consistently. This leads to better decision-making, regulatory compliance and increased efficiency. IDP has wide-ranging applications. In healthcare, it speeds up claims processing and improves patient data management. In finance, it automates invoice processing and streamlines loan applications. In legal, it assists with contract analysis and due diligence. And in insurance, IDP automates information extraction from claims and reports, accelerating processing and boosting customer satisfaction. One specific example of this innovation in action is DocuWare’s own Intelligent Document Processing (DocuWare IDP). Our AI-powered solution streamlines how businesses handle even the most complex documents. Available as a standalone product, in the DocuWare Cloud or on-premises, DocuWare IDP automates text recognition, document classification and data extraction from various document types, including invoices, contracts and ID cards.


Practical Strategies to Overcome Cyber Security Compliance Standards Fatigue

The suitability of a cyber security framework must be determined based on applicable laws, industry standards, organizational risk profile, business goals, and resource constraints. It goes without saying that organizations providing critical services to the US federal government will pursue NIST compliance, while Small and Medium-sized Enterprises (SMEs) may want to focus on the CIS Top 20, given resource constraints. Once the cyber security team has selected the most suitable framework, they should seek endorsement from the executive team or cyber risk governance committee to ensure a shared sense of purpose. ... Mapping will enable organizations to identify overlapping controls to create a unified control set that addresses the requirements of multiple frameworks. This way, the organization can avoid redundant controls and processes, which in turn reduces cyber security team fatigue, accelerates innovation and lowers the cost of security. ... Cyber compliance standards play an integral role in ensuring organizations prioritize the protection of consumer confidential and sensitive information above profits. But to reduce pressure on cyber teams already battling stress, cyber leaders must take a pragmatic approach that carefully balances compliance with innovation, agility and efficiency.


The Elaboration of a Modern TOGAF Architecture Maturity Model

This innovative TOGAF architecture maturity model provides a structured framework for assessing and enhancing an organization’s enterprise architecture capabilities in organizations that need to become more agile. By defining maturity levels across ten critical domains, the model enables organizations to transition from unstructured, reactive practices to well-governed, data-driven, and continuously optimized architectural processes. The five maturity levels—Initial, Under Development, Defined, Managed, and Measured—offer a clear roadmap for organizations to integrate EA into strategic decision-making, align business and IT investments, and establish governance frameworks that enhance operational efficiency. Through this approach, EA evolves from a support function into a key driver of innovation and business transformation. This model emphasizes continuous improvement and strategic alignment, ensuring that EA not only supports but actively contributes to an organization’s long-term success. By embedding EA into business strategy, security, governance, and solution delivery, enterprises can enhance agility, mitigate risks, and drive competitive advantage. Measuring EA’s impact through financial metrics and performance indicators further ensures that architecture initiatives provide tangible business value. 


Securing digital products under the Cyber Resilience Act

The CRA explicitly states that products should have an appropriate level of cybersecurity based on the risks; the risk-based approach is fundamental to the regulation. This has the advantage that we can set the bar wherever we want, as long as we make a good risk-based argument for that level. This implies that we must have a methodical categorization of risk, hence we need application risk profiles. In order to implement this we can follow the quality criteria of maturity levels 1, 2 and 3 of the application risk profiles practice. This includes having a clearly agreed upon, understood, accessible and updated risk classification system. ... Many companies already have SAMM assessments. If you do not have SAMM assessments but use another maturity framework such as OWASP DSOMM or NIST CSF, you can use the available mappings to accelerate the translation to SAMM. Otherwise we recommend doing SAMM assessments, identifying the gaps in the processes needed, and then deciding on a roadmap to develop the processes and capabilities over time. ... Under the CRA we need to demonstrate that we have adequate security processes in place, and that we do not ship products with known vulnerabilities. So apart from having a good picture of the data flows, we need a good picture of the processes in place.


Insider Threats, AI and Social Engineering: The Triad of Modern Cybersecurity Threats

Insiders who are targeted or influenced by external adversaries to commit data theft may not be caught by traditional security solutions, because attackers might combine manipulation techniques with other tactics to gain access to an organization's confidential data. This can be seen in the insider threats carried out by Famous Chollima, a cyber-criminal group that targeted organizations through employees who were working for the group. The group recruited individuals, falsified their identities, and helped them secure employment with target organizations. Once inside, the group gained access to sensitive information through the employees it had placed. ... Since AI can mimic user behavior, it is hard for security teams to detect the difference between normal activity and AI-generated activity. AI can also be used by insiders to assist in their plans: for example, an insider could use or train AI models to analyze user activity, pinpoint the window of least activity to deploy malware onto a critical system at an optimal time, and disguise this activity as a legitimate action to avoid detection by monitoring solutions.


How Successful Leaders Get More Done in Less Time

In order to be successful, leaders must make a conscious shift to move from reactive to intentional. They must guard their calendars, build in time for deep work, and set clear boundaries to focus on what truly drives progress. ... Time-blocking is one of the simplest, most powerful tools a leader can use. At its core, time-blocking is the practice of assigning specific blocks of time to different types of work: deep focus, meetings, admin, creative thinking or even rest. Why does it work? Because it eliminates context-switching, which is the silent killer of productivity. Instead of bouncing between tasks and losing momentum, time-blocking gives your day structure. It creates rhythm and ensures that what matters most actually gets done. ... Not everything on your to-do list matters. But without a clear system to prioritize, everything feels urgent. That's how leaders end up spending hours on reactive work while their most impactful tasks get pushed to "tomorrow." The fix? Use prioritization frameworks like the 80/20 rule (20% of tasks drive 80% of results) to stay focused on what actually moves the needle. ... If you're still doing everything yourself, there's a chance you're creating a bottleneck. The best leaders know that delegation buys back time and creates opportunities for others to grow. 


The tech backbone creating the future of infrastructure

Governments and administrators around the world are rapidly realizing the benefits of integrated infrastructure. A prime example is the growing trend for connecting utilities across borders to streamline operations and enhance efficiency. The Federal-State Modern Grid Deployment Initiative, involving 21 US states, is a major step towards modernizing the power grid, boosting reliability and enhancing resource management. Across the Atlantic, the EU is linking energy systems; by 2030, each member nation should be sharing at least 15% of its electricity production with its neighbors. On a smaller scale, the World Economic Forum is encouraging industrial clusters—including in China, Indonesia, Ohio and Australia—to share resources, infrastructure and risks to maximize economic and environmental value en route to net zero. ... Data is a nation’s most valuable asset. It is now being collected from multiple infrastructure points—traffic, energy grids, utilities. Infusing it with artificial intelligence (AI) in the cloud enables businesses to optimize their operations in real time. Centralizing this information, such as in an integrated command-and-control center, facilitates smoother collaboration and closer interaction among different sectors. 


No matter how advanced the technology is, it can all fall apart without strong security

One cybersecurity trend that truly excites me is the convergence of Artificial Intelligence (AI) with cybersecurity, especially in the areas of threat detection, incident response, and predictive risk management. This has motivated me to pursue a PhD in Cybersecurity using AI. Unlike traditional rule-based systems, AI is revolutionising cybersecurity by enabling proactive and adaptive defence strategies through contextual intelligence, shifting the focus from reactive to proactive measures. ... The real magic lies in combining AI with human judgement — what I often refer to as “human-in-the-loop cybersecurity.” This balance allows teams to scale faster, stay sharp, and focus on strategic defence instead of chasing every alert manually. What I have learnt from all this is that the fusion of AI and cybersecurity is not just an enhancement; it’s a paradigm shift. However, the key is achieving balance. Hence, AI should augment human intelligence, rather than supplant it. ... In the realm of financial cybersecurity, the most significant risk isn’t solely technical; it stems from the gap between security measures and business objectives. As the CISO, my responsibility extends beyond merely protecting against threats; I aim to integrate cybersecurity into the core of the organisation, transforming it into a strategic enabler rather than a reactive measure.

Daily Tech Digest - April 17, 2025


Quote for the day:

"We are only as effective as our people's perception of us." -- Danny Cox



Why data literacy is essential - and elusive - for business leaders in the AI age

The rising importance of data-driven decision-making is clear but elusive. However, the trust in the data underpinning these decisions is falling. Business leaders do not feel equipped to find, analyze, and interpret the data they need in an increasingly competitive business environment. The added complexity is the convergence of macro and micro uncertainties -- including economic, political, financial, technological, competitive landscape, and talent shortage variables.  ... The business need for greater adoption of AI capabilities, including predictive, generative and agentic AI solutions, is increasing the need for businesses to have confidence and trust in their data. Survey results show that higher adoption of AI will require stronger data literacy and access to trustworthy data. ... The alarming part of the survey is that 54% of business leaders are not confident in their ability to find, analyze, and interpret data on their own. And fewer than half of business leaders are sure they can use data to drive action and decision-making, generate and deliver timely insights, or effectively use data in their day-to-day work. Data literacy and confidence in the data are two growth opportunities for business leaders across all lines of business.


Cyber threats against energy sector surge as global tensions mount

These cyber-espionage campaigns are primarily driven by geopolitical considerations, as tensions shaped by the Russo-Ukraine war, the Gaza conflict, and the U.S.’ “great power struggle” with China are projected into cyberspace. With hostilities rising, potentially edging toward a third world war, rival nations are attempting to demonstrate their cyber-military capabilities by penetrating Western and Western-allied critical infrastructure networks. Fortunately, these nation-state campaigns have overwhelmingly been limited to espionage, as opposed to Stuxnet-style attacks intended to cause harm in the physical realm. A secondary driver of increasing cyberattacks against energy targets is technological transformation, marked by cloud adoption, which has largely mediated the growing convergence of IT and OT networks. OT-IT convergence across critical infrastructure sectors has thus made networked industrial Internet of Things (IIoT) appliances and systems more penetrable to threat actors. Specifically, researchers have observed that adversaries are using compromised IT environments as staging points to move laterally into OT networks. Compromising OT can be particularly lucrative for ransomware actors, because this type of attack enables adversaries to physically paralyze energy production operations, empowering them with the leverage needed to command higher ransom sums. 


The Active Data Architecture Era Is Here, Dresner Says

“The buildout of an active data architecture approach to accessing, combining, and preparing data speaks to a degree of maturity and sophistication in leveraging data as a strategic asset,” Dresner Advisory Services writes in the report. “It is not surprising, then, that respondents who rate their BI initiatives as a success place a much higher relative importance on active data architecture concepts compared with those organizations that are less successful.” Data integration is a major component of an active data architecture, but there are different ways that users can implement data integration. According to Dresner, the majority of active data architecture practitioners are utilizing batch and bulk data integration tools, such as ETL/ELT offerings. Fewer organizations are utilizing data virtualization as the primary data integration method, or real-time event streaming (e.g., Apache Kafka) or message-based data movement (e.g., RabbitMQ). Data catalogs and metadata management are important aspects of an active data architecture. “The diverse, distributed, connected, and dynamic nature of active data architecture requires capabilities to collect, understand, and leverage metadata describing relevant data sources, models, metrics, governance rules, and more,” Dresner writes.


How can businesses solve the AI engineering talent gap?

“It is unclear whether nationalistic tendencies will encourage experts to remain in their home countries. Preferences may not only be impacted by compensation levels, but also by international attention to recent US treatment of immigrants and guests, as well as controversy at academic institutions,” says Bhattacharyya. But businesses can mitigate this global uncertainty, to some extent, by casting their hiring net wider to include remote working. Indeed, Thomas Mackenbrock, CEO-designate of Paris headquartered BPO giant Teleperformance says that the company’s global footprint helps it to fulfil AI skills demand. “We’re not reliant on any single market [for skills] as we are present in almost 100 markets,” explains Mackenbrock. ... “The future workforce will need to combine human ingenuity with new and emerging AI technologies; going beyond just technical skills alone,” says Khaled Benkrid, senior director of education and research at Arm. “Academic institutions play a pivotal role in shaping this future workforce. By collaborating with industry to conduct research and integrate AI into their curricula, they ensure that graduates possess the skills required by the industry. “Such collaborations with industry partners keep academic programs aligned with research frontiers and evolving job market demands, creating a seamless transition for students entering the workforce,” says Benkrid.


Breaking Down the Walls Between IT and OT

“Even though there's cyber on both sides, they are fundamentally different in concept,” Ian Bramson, vice president of global industrial cybersecurity at Black & Veatch, an engineering, procurement, consulting, and construction company, tells InformationWeek. “It's one of the things that have kept them more apart traditionally.” ... “OT is looked at as having a much longer lifespan, 30 to 50 years in some cases. An IT asset, the typical laptop these days that's issued to an individual in a company, three years is about when most organization start to think about issuing a replacement,” says Chris Hallenbeck, CISO for the Americas at endpoint management company Tanium. ... The skillsets required of the teams to operate IT and OT systems are also quite different. On one side, you likely have people skilled in traditional systems engineering. They may have no idea how to manage the programmable logic controllers (PLC) commonly used in OT systems. The divide between IT and OT has been, in some ways, purposeful. The Purdue model, for example, provides a framework for segmenting ICS networks, keeping them separate from corporate networks and the internet. ... Cyberattack vectors on IT and OT environments look different and result in different consequences. “On the IT side, the impact is primarily data loss and all of the second order effects of your data getting stolen or your data getting held for ransom,” says Shankar. 


Are Return on Equity and Value Creation New Metrics for CIOs?

While driving efficiency is not a new concept for technology leaders, what is different today is the scale and significance of their efforts. In many organizations, CIOs are being tasked with reimagining how value is generated, assessed and delivered. ... Traditionally, technology ROI discussions have focused on cost savings, automation, consolidation and reduced headcount. But that perspective is shifting rapidly. CIOs are now prioritizing customer acquisition, retention, pricing power and speed to market. CIOs also play a more integral role in product innovation than ever before. To remain relevant, they must speak the language of gross margin, not just uptime. This evolution is increasingly reflected in boardroom conversations. CIOs once presented dashboards of uptime and service-level agreements, but today, they discuss customer value, operational efficiency and platform monetization. ... In some cases, technology leaders scale too quickly before proving value. For example, expensive cloud migrations may proceed without a corresponding shift in the business model. This can result in data lakes with no clear application or platforms launched without product-market fit. These missteps can severely undermine ROE.


AI brings order to observability disorder

Artificial intelligence has contributed to complexity. Businesses now want to monitor large language models as well as applications to spot anomalies that may contribute to inaccuracies, bias, and slow performance. Legacy observability systems were never designed for the ability to bring together these disparate sources of data. A unified observability platform leveraging AI can radically simplify the tools and processes for improved visibility and resolving problems faster, enabling the business to optimize operations based on reliable insights. By consolidating on one set of integrated observability solutions, organizations can lower costs, simplify complex processes, and enable better cross-function collaboration. “Noise overwhelms site reliability engineering teams,” says Gagan Singh, Vice President of Product Marketing at Elastic. Irrelevant and low-priority alerts can overwhelm engineers, leading them to overlook critical issues and delaying incident response. Machine learning models are ideally suited to categorizing anomalies and surfacing relevant alerts so engineers can focus on critical performance and availability issues. “We can now leverage GenAI to enable SREs to surface insights more effectively,” Singh says.


Why Most IaC Strategies Still Fail — And How To Fix Them

There are a few common reasons IaC strategies fail in practice. Let’s explore what they are, and dive into some practical, battle-tested fixes to help teams regain control, improve consistency and deliver on the original promise of IaC. ... Without a unified direction, fragmentation sets in. Teams often get locked into incompatible tooling — some using AWS CloudFormation for perceived enterprise alignment, others favoring Terraform for its flexibility. These tool silos quickly become barriers to collaboration. ... Resistance to change also plays a role. Some engineers may prefer to stick with familiar interfaces and manual operations, viewing IaC as an unnecessary complication. Meanwhile, other teams might be fully invested in reusable modules and automated pipelines, leading to fractured workflows and collaboration breakdowns. Successful IaC implementation requires building skills, bridging silos and addressing resistance with empathy and training — not just tooling. To close the gap, teams need clear onboarding plans, shared coding standards and champions who can guide others through real-world usage — not just theory. ... Drift is inevitable: manual changes, rushed fixes and one-off permissions often leave code and reality out of sync. Without visibility into those deviations, troubleshooting becomes guesswork. 
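One way to surface that drift continuously is a scheduled job that runs terraform plan with -detailed-exitcode, which exits 0 when code and reality match and 2 when changes are pending. The sketch below assumes Terraform is the tool in use and that the workspace paths and alerting are placeholders.

```python
# Sketch of a scheduled drift check: `terraform plan -detailed-exitcode`
# exits 0 when live state matches code, 2 when changes (drift) are detected,
# and 1 on errors. Workspace paths and reporting are placeholders.

import subprocess

WORKSPACES = ["infra/networking", "infra/app-cluster"]  # example module paths

def check_drift(path: str) -> str:
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=path,
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        return "in sync"
    if result.returncode == 2:
        return "DRIFT DETECTED"
    return f"error: {result.stderr.strip()[:200]}"

if __name__ == "__main__":
    for ws in WORKSPACES:
        print(f"{ws}: {check_drift(ws)}")
```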


What will the sustainable data center of the future look like?

The energy issue not only affects operators/suppliers. If a customer uses a lot of energy, they will get a bill to match, says Van den Bosch. “I [as a supplier] have to provide the customer with all kinds of details about my infrastructure. That includes everything from air conditioning to the specific energy consumption of the server racks. The customer is then able to reduce that energy consumption.” This can be done, for example, by replacing servers earlier than they have been before, a departure from the upgrade cycles of yesteryear. Ruud Mulder of Dell Technologies calls for the sustainability of equipment to be made measurable in great detail. This can be done by means of a digital passport, showing where all the materials come from and how recyclable they are. He thinks there is still much room for improvement in this area. For example, future designs can be recycled better by separating plastic and gold from each other, refurbishing components and more. This yield increase is often attractive, as more computing power is required for ambitious AI plans, and the efficiency of chips increases with each generation. “The transition to AI means that you sometimes have to say goodbye to your equipment sooner,” says Mulder. The AI issue is highly relevant to the future of the modern data center in any case. 


Fitness Functions for Your Architecture

Fitness functions offer us self-defined guardrails for certain aspects of our architecture. If we stay within certain (self-chosen) ranges, we're safe (our architecture is "good"). ... Many projects already use some kinds of fitness functions, although they might not use the term. For example, metrics from static code checkers, linters, and verification tools (such as PMD, FindBugs/SpotBugs, ESLint, SonarQube, and many more). Collecting the metrics alone doesn't make it a fitness function, though. You'll need fast feedback for your developers, and you need to define clear measures: limits or ranges for tolerated violations and actions to take if a metric indicates a violation. In software architecture, we have certain architectural styles and patterns to structure our code in order to improve understandability, maintainability, replaceability, and so on. Maybe the most well-known pattern is a layered architecture with, quite often, a front-end layer above a back-end layer. To take advantage of such layering, we'll allow and disallow certain dependencies between the layers. Usually, dependencies are allowed from top to down, i.e. from the front end to the back end, but not the other way around. A fitness function for a layered architecture will analyze the code to find all dependencies between the front end and the back end.
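A minimal version of such a fitness function, assuming a project laid out as frontend/ and backend/ packages, might scan the back-end sources for imports of the front-end layer and fail the build when it finds any. The sketch below is illustrative rather than a drop-in tool; the package names are assumptions about the project layout.

```python
# Sketch of a fitness function enforcing the layering rule described above:
# back-end modules must not import from the front-end package. Package names
# ("frontend", "backend") are assumptions about the project layout.

import ast
import pathlib
import sys

FORBIDDEN_PREFIX = "frontend"          # the back end may not depend on this layer
BACKEND_DIR = pathlib.Path("backend")  # root of the back-end source tree

def violations() -> list[str]:
    found = []
    for py_file in BACKEND_DIR.rglob("*.py"):
        tree = ast.parse(py_file.read_text(), filename=str(py_file))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            for name in names:
                if name.split(".")[0] == FORBIDDEN_PREFIX:
                    found.append(f"{py_file}:{node.lineno} imports {name}")
    return found

if __name__ == "__main__":
    problems = violations()
    for p in problems:
        print(p)
    sys.exit(1 if problems else 0)  # non-zero exit fails the CI pipeline
```

Run in CI on every commit, this gives developers the fast feedback and the clear pass/fail threshold that turn a metric into a fitness function.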

Daily Tech Digest - April 16, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How to lead humans in the age of AI

Quiet the noise around AI and you will find the simple truth that the most crucial workplace capabilities remain deeply human. ... This human skills gap is even more urgent when Gen Z is factored in. They entered the workforce aligned with a shift to remote and hybrid environments, resulting in fewer opportunities to hone interpersonal skills through real-life interactions. This is not a critique of an entire generation, but rather an acknowledgment of a broad workplace challenge. And Gen Z is not alone in needing to strengthen communication across generational divides, but that is a topic for another day. ... Leaders must embrace their inner improviser. Yes, improvisation, like what you have watched on Whose Line Is It Anyway? Or the awkward performance your college roommate invited you to in that obscure college lounge. The skills of an improviser are a proven method for thriving amidst uncertainty. Decades of experience at Second City Works and studies published by The Behavioral Scientist confirm the principles of improv equip us to handle change with agility, empathy, and resilience. ... Make listening intentional and visible. Respond with the phrase, “So what I’m hearing is,” followed by paraphrasing what you heard. Pose thoughtful questions that indicate your priority is understanding, not just replying.


When companies merge, so do their cyber threats

Merging two companies means merging two security cultures. That is often harder than unifying tools or policies. While the technical side of post-M&A integration is important, it’s the human and procedural elements that often introduce the biggest risks. “When CloudSploit was acquired, one of the most underestimated challenges wasn’t technical, it was cultural,” said Josh Rosenthal, Holistic Customer Success Executive at REPlexus.com. “Connecting two companies securely is incredibly complex, even when the acquired company is much smaller.” Too often, the focus in M&A deals lands on surface-level assurances like SOC 2 certifications or recent penetration tests. While important, those are “table stakes,” Rosenthal noted. “They help, but they don’t address the real friction: mismatched security practices, vendor policies, and team behaviors. That’s where M&A cybersecurity risk really lives.” As AI accelerates the speed and scale of attacks, CISOs are under increasing pressure to ensure seamless integration. “Even a phishing attack targeting a vendor onboarding platform can introduce major vulnerabilities during the M&A process,” Rosenthal warned. To stay ahead of these risks, he said, smart security leaders need to dig deeper than documentation.


Measuring success in dataops, data governance, and data security

If you are on a data governance or security team, pay attention to the metrics that CIOs, chief information security officers (CISOs), and chief data officers (CDOs) will consider when prioritizing investments and the types of initiatives to focus on. Amer Deeba, GVP of Proofpoint DSPM Group, says CIOs need to understand what percentage of their data is valuable or sensitive and quantify its importance to the business—whether it supports revenue, compliance, or innovation. “Metrics like time-to-insight, ROI from tools, cost savings from eliminating unused shadow data, or percentage of tools reducing data incidents are all good examples of metrics that tie back to clear value,” says Deeba. ... Dataops technical strategies include data pipelines to move data, data streaming for real-time data sources like IoT, and in-pipeline data quality automations. Using the reliability of water pipelines as an analogy is useful because no one wants pipeline blockages, leaky pipes, pressure drops, or dirty water from their plumbing systems. “The effectiveness of dataops can be measured by tracking the pipeline success-to-failure ratio and the time spent on data preparation,” says Sunil Kalra, practice head of data engineering at LatentView. “Comparing planned deployments with unplanned deployments needed to address issues can also provide insights into process efficiency.”
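As a rough illustration of the metrics Kalra describes, the sketch below computes a pipeline success-to-failure ratio and the share of runtime spent on data preparation from a list of run records. The record fields (`status`, `prep_seconds`, `total_seconds`) and the sample values are assumptions for illustration, not part of any specific dataops tool.

```python
# Illustrative dataops metrics computed from hypothetical pipeline run records.
from dataclasses import dataclass


@dataclass
class PipelineRun:
    status: str           # "success" or "failed"
    prep_seconds: float   # time spent on data preparation
    total_seconds: float  # total pipeline runtime


def success_failure_ratio(runs: list[PipelineRun]) -> float:
    successes = sum(1 for r in runs if r.status == "success")
    failures = len(runs) - successes
    return successes / failures if failures else float("inf")


def prep_time_share(runs: list[PipelineRun]) -> float:
    total = sum(r.total_seconds for r in runs)
    prep = sum(r.prep_seconds for r in runs)
    return prep / total if total else 0.0


runs = [
    PipelineRun("success", prep_seconds=120, total_seconds=300),
    PipelineRun("failed", prep_seconds=90, total_seconds=150),
    PipelineRun("success", prep_seconds=100, total_seconds=280),
]
print(f"success-to-failure ratio: {success_failure_ratio(runs):.1f}")
print(f"share of time in data preparation: {prep_time_share(runs):.0%}")
```

Tracked over time rather than as one-off numbers, these figures give the trend-style evidence that executives tend to ask for when prioritizing dataops investments.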


How Safe Is the Code You Don’t Write? The Risks of Third-Party Software

Open-source and commercial packages and public libraries accelerate innovation, drive down development costs, and have become the invisible scaffolding of the Internet. GitHub recently highlighted that 99% of all software projects use third-party components. But with great reuse comes great risk. Third-party code is a double-edged sword. On the one hand, it’s indispensable. On the other hand, it’s a potential liability. In our race to deliver software faster, we’ve created sprawling software supply chains with thousands of dependencies, many of which receive little scrutiny after the initial deployment. These dependencies often pull in other dependencies, each one potentially introducing outdated, vulnerable, or even malicious code into environments that power business-critical operations. ... The risk is real, so what do we do? We can start by treating third-party code with the same caution and scrutiny we apply to everything else that enters the production pipeline. This includes maintaining a living inventory of all third-party components across every application and monitoring their status to prescreen updates and catch suspicious changes. With so many ways for threats to hide, we can’t take anything on trust, so next comes actively checking for outdated or vulnerable components as well as new vulnerabilities introduced by third-party code. 
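As one hedged example of what “a living inventory plus active checking” can look like for a Python project, the sketch below reads a pinned requirements file and queries the public OSV vulnerability database (https://api.osv.dev) for each dependency. The `name==version` pin format and the choice of OSV are assumptions for illustration; dedicated SCA tools and SBOM formats such as CycloneDX or SPDX cover far more ground than this.

```python
# Sketch: build a simple inventory from a pinned requirements.txt and check
# each package against the public OSV API. Assumes "name==version" pins and
# the PyPI ecosystem; real supply-chain tooling handles many more cases.
import json
import urllib.request

OSV_URL = "https://api.osv.dev/v1/query"


def read_inventory(path: str = "requirements.txt") -> list[tuple[str, str]]:
    inventory = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                inventory.append((name.strip(), version.strip()))
    return inventory


def known_vulnerabilities(name: str, version: str) -> list[str]:
    payload = json.dumps(
        {"version": version, "package": {"name": name, "ecosystem": "PyPI"}}
    ).encode("utf-8")
    request = urllib.request.Request(
        OSV_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        vulns = json.load(response).get("vulns", [])
    return [v.get("id", "unknown") for v in vulns]


if __name__ == "__main__":
    for name, version in read_inventory():
        ids = known_vulnerabilities(name, version)
        status = ", ".join(ids) if ids else "no known advisories"
        print(f"{name}=={version}: {status}")
```

Run on a schedule or in CI, even a simple check like this keeps the inventory “living” and surfaces newly disclosed vulnerabilities in dependencies that were clean at deployment time.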


The AI Leadership Crisis: Why Chief AI Officers Are Failing (And How To Fix It)

Perhaps the most dangerous challenge facing CAIOs is the profound disconnect between expectations and reality. Many boards anticipate immediate, transformative results from AI initiatives – the digital equivalent of demanding harvest without sowing. AI transformation isn't a sprint; it's a marathon with hurdles. Meaningful implementation requires persistent investment in data infrastructure, skills development, and organizational change management. Yet CAIOs often face arbitrary deadlines that are disconnected from these realities. One manufacturing company I worked with expected their newly appointed CAIO to deliver $50 million in AI-driven cost savings within 12 months. When those unrealistic targets weren't met, support for the role evaporated – despite significant progress in building foundational capabilities. ... There are many potential risks of AI, from bias to privacy concerns, and the right level of governance is essential. CAIOs are typically tasked with ensuring responsible AI use yet frequently lack the authority to enforce guidelines across departments. This accountability-without-authority dilemma places CAIOs in an impossible position. They're responsible for AI ethics and risk management, but departmental leaders can ignore their guidance with minimal consequences.


OT security: how AI is both a threat and a protector

Burying one’s head in the sand, a favorite pastime among some OT personnel, no longer works. Security through obscurity is and remains a bad idea. Heinemeyer: “I’m not saying that everyone will be hacked, but it is increasingly likely these days.” Possibly, the ostrich policy has to do with, yes, the reporting on OT vulnerabilities, including by yours truly. Ancient protocols, ICS systems and PLCs with exploitable vulnerabilities are evidently risk factors. However, the people responsible for maintaining these systems at manufacturing and utility facilities know better than anyone that actual exploitation of these obscure systems is improbable. ... Given the increasing threat, is the new focus on common best practices enough? We have already concluded that vulnerabilities should not be judged solely on their CVSS score. Scores are an indication, certainly, but a combination of CVEs with middle-of-the-range scores appears to have the most serious consequences. Heinemeyer says that treating the identification of all vulnerabilities as the ultimate solution was the established mindset from the 1990s to the 2010s. In recent years, he says, security professionals have realized that specific issues need to be prioritized, quantifying technical exploitability through measurements such as EPSS. 
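To illustrate the shift Heinemeyer describes, here is a small sketch that ranks findings by estimated likelihood of exploitation (EPSS) first and raw severity (CVSS) only as a tie-breaker, rather than sorting by CVSS alone. The sample records and the sort order are assumptions chosen to show the idea; they are not an established scoring scheme.

```python
# Sketch: prioritize vulnerabilities by exploit likelihood (EPSS) before raw
# severity (CVSS). Sample CVE records below are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Finding:
    cve_id: str
    cvss: float   # severity score, 0.0-10.0
    epss: float   # estimated probability of exploitation, 0.0-1.0


def prioritize(findings: list[Finding]) -> list[Finding]:
    # Sort by EPSS first, then CVSS as a tie-breaker, highest first.
    return sorted(findings, key=lambda f: (f.epss, f.cvss), reverse=True)


findings = [
    Finding("CVE-2024-0001", cvss=9.8, epss=0.02),  # severe but rarely exploited
    Finding("CVE-2024-0002", cvss=6.5, epss=0.71),  # mid-range score, actively exploited
    Finding("CVE-2024-0003", cvss=7.2, epss=0.40),
]
for f in prioritize(findings):
    print(f"{f.cve_id}: EPSS={f.epss:.2f}, CVSS={f.cvss}")
```

Under this ordering the mid-scoring but actively exploited CVE rises to the top, which is exactly the reprioritization away from CVSS-only triage that the excerpt argues for.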


In a Social Engineering Showdown: AI Takes Red Teams to the Mat

In a revelation that shouldn’t surprise, but still should alarm security professionals, AI has gotten much more proficient in social engineering. Back in the day, AI was 31% less effective than human beings in creating simulated phishing campaigns. But now, new research from Hoxhunt suggests that the game-changing technology’s phishing performance against elite human red teams has improved by 55%. ... Using AI offensively can raise legal and regulatory hackles related to privacy laws and ethical standards, Soroko adds, as well as creating a dependency risk. “Over-reliance on AI could diminish human expertise and intuition within cybersecurity teams.” But that doesn’t mean bad actors will win the day or get the best of cyber defenders. Instead, security teams could and should turn the tables on them. “The same capabilities that make AI an effective phishing engine can — and must — be used to defend against it,” says Avist. With an emphasis on “must.” ... It seems that tried and true basics are a good place to start. “Ensuring transparency, accountability and responsible use of AI in offensive cybersecurity is crucial,” says Kowski. As with any aspect of tech and security, keeping AI models “up-to-date with the latest threat intelligence and attack techniques is also crucial,” he says. “Balancing AI capabilities with human expertise remains a key challenge.”


Optimizing CI/CD for Trust, Observability and Developer Well-Being

While speed is often cited as a key metric for CI/CD pipelines, the quality and actionability of the feedback provided are equally, if not more, important for developers. Jones, emphasizing the need for deep observability, stresses, “Don’t just tell me that the steps of the pipeline succeeded or failed, quantify that success or failure. Show me metrics on test coverage and show me trends and performance-related details. I want to see stack traces when things fail. I want to be able to trace key systems even if they aren’t related to code that I’ve changed because we have large complex architectures that involve a lot of interconnected capabilities that all need to work together.” This level of technical insight empowers developers to understand and resolve issues quickly. It underlines the importance of implementing comprehensive monitoring and logging within your CI/CD pipeline so that developers get detailed insight into build, test, and deployment processes. Shifting feedback earlier in the development lifecycle also serves everyone well; the key is making that feedback contextual and delivering it before code is merged. For example, running security scans at the pull request stage, rather than after deployment, ensures developers get actionable feedback while still in context. 
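As a hedged sketch of “quantify that success or failure,” the snippet below turns a raw coverage number into a pass/fail quality gate with a trend against the previous run. The file names, JSON layout, and 80% threshold are assumptions for illustration; a real pipeline would get these values from its own coverage and test tooling.

```python
# Sketch: quantify a pipeline quality gate instead of reporting a bare pass/fail.
# The threshold, file names, and JSON layout are illustrative assumptions.
import json
import pathlib
import sys

THRESHOLD = 80.0                      # minimum acceptable line coverage, percent
CURRENT = pathlib.Path("coverage.json")
PREVIOUS = pathlib.Path("coverage_previous.json")


def read_coverage(path: pathlib.Path) -> float | None:
    if not path.exists():
        return None
    data = json.loads(path.read_text(encoding="utf-8"))
    return float(data["line_coverage_percent"])


current = read_coverage(CURRENT)
previous = read_coverage(PREVIOUS)
if current is None:
    print("no coverage report found")
    sys.exit(1)

trend = "" if previous is None else f" ({current - previous:+.1f} vs previous run)"
print(f"line coverage: {current:.1f}%{trend}, threshold {THRESHOLD:.0f}%")
sys.exit(0 if current >= THRESHOLD else 1)  # non-zero exit fails the pipeline step
```

Run as a pull-request check rather than a post-deployment report, a gate like this gives developers the quantified, in-context feedback the excerpt calls for.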


AI agents vs. agentic AI: What do enterprises want?

If AI and AI agents are application components, then they fit into both business processes and workflows. A business process is a flow, and these days at least part of that flow is the set of data exchanges among applications or their components—what we typically call a “workflow.” It’s common to think of threading workflows through both applications and workers as something separate from the applications themselves. Remember the “enterprise service bus”? That’s still what most enterprises prefer for business processes that involve AI: get an AI agent that does something, give it the output of some prior step, and let it create output for the step beyond it. Whether an AI agent is then “autonomous” is really decided by whether its output goes to a human for review or is simply accepted and implemented. ... What enterprises like about this vision of an AI agent is that it’s possible to introduce AI into a business process without having AI take over the process or requiring the process to be reshaped to accommodate AI. Tech adoption has long favored strategies that limit the scope of impact, to control both cost and the level of disruption the technology creates. This favors having AI integrated with current applications, which is why enterprises have always thought of AI improvements to their business operations overall as being linked to incorporating AI into business analytics.
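A minimal sketch of that distinction: the same agent step can run in a human-review mode or an autonomous mode, and nothing about the agent itself changes, only what the workflow does with its output. The function names here, such as `propose_action` and `run_step`, are hypothetical stand-ins and are not tied to any particular agent framework.

```python
# Sketch: "autonomy" as a property of the workflow, not of the agent.
# propose_action() is a hypothetical stub standing in for any AI agent call.
def propose_action(previous_step_output: str) -> str:
    # In a real workflow this would invoke an agent with the prior step's output.
    return f"proposed update based on: {previous_step_output}"


def run_step(previous_step_output: str, autonomous: bool) -> str | None:
    proposal = propose_action(previous_step_output)
    if autonomous:
        return proposal  # accepted and passed to the next step without review
    print("awaiting human review:", proposal)
    approved = input("approve? [y/N] ").strip().lower() == "y"
    return proposal if approved else None


# Human-in-the-loop mode: output only flows onward if a reviewer approves it.
result = run_step("output of the prior workflow step", autonomous=False)
```

Flipping the `autonomous` flag is the whole difference between an AI agent that assists a process and one that runs it, which is why enterprises can adopt agents incrementally without reshaping the surrounding process.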


Liquid Cooling is ideal today, essential tomorrow, says HPE CTO

We’re moving from standard consumption levels—like 1 kilowatt per rack—to as high as 3 kilowatts or more. The challenge lies in provisioning that much power and doing it sustainably. Some estimates suggest that data centers, which currently account for about 1% of global power consumption, could reach 5% if trends continue. This is why sustainability isn’t just a checkbox anymore—it’s a moral imperative. I often ask our customers: Who do you think the world belongs to? Most pause and reflect. My view is that we’re simply renting the world from our grandchildren. That thought should shape how we design infrastructure today. ... Air cooling works up to a point. But as components become denser, with more transistors per chip, air struggles. You’d need to run fans faster and use more chilled air to dissipate heat, which is energy-intensive. Liquid, due to its higher thermal conductivity and density, absorbs and transfers heat much more efficiently. Some DLC systems use cold plates only on select components. Others use them across the board. There are hybrid solutions too, combining liquid and air. But full DLC systems, like ours, eliminate the need for fans altogether. ... Direct liquid cooling (DLC) is becoming essential as data centers support AI and HPC workloads that demand high performance and density.