Daily Tech Digest - April 19, 2025


Quote for the day:

"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous



AI Agents Are Coming to Work: Are Organizations Equipped?

The promise of agentic AI is already evident in organizations adopting it. Fiserv, the global fintech powerhouse, developed an agentic AI application that autonomously assigns merchant codes to businesses, reducing human intervention to under 1%. Sharbel Shaaya, director of AI operations and intelligent automation at Fiserv, said, "Tomorrow's agentic systems will handle this groundwork natively, amplifying their value." In the automotive world, Ford Motor Company is using agentic AI to amplify car design. Bryan Goodman, director of AI at Ford Motor Company, said, "Traditionally, Ford's designers sculpt physical clay models, a time-consuming process followed by lengthy engineering simulations. One computational fluid dynamics run used to take 15 hours; an AI model now predicts the outcome in 10 seconds." ... In regulated industries, compliance adds complexity. Ramnik Bajaj, chief data and analytics officer at United Services Automobile Association, sees agentic AI interpreting requests in insurance but insists on human oversight for tasks such as claims adjudication. "Regulatory constraints demand a human in the loop," Bajaj said. Trust is another hurdle - 61% of organizations cite concerns about errors, bias and data quality. "Scaling AI requires robust governance. Without trust, pilots stay pilots," Sarker said.


Code, cloud, and culture: The tech blueprint transforming Indian workplaces

The shift to hybrid cloud infrastructure is enabling Indian enterprises to modernise their legacy systems while scaling with agility. According to a report by EY India, 90% of Indian businesses believe that cloud transformation is accelerating their AI initiatives. Hybrid cloud environments—which blend on-premise infrastructure with public and private cloud—are becoming the default architecture for industries like banking, insurance, and manufacturing. HDFC Bank, for example, has adopted a hybrid cloud model to offer hyper-personalised customer services and real-time transaction capabilities. This digital core is helping financial institutions respond faster to market changes while maintaining strict regulatory compliance. ... No technological transformation is complete without human capability. The demand for AI-skilled professionals in India has grown 14x between 2016 and 2023, and the country is expected to need over one million AI professionals by 2026. Companies are responding with aggressive reskilling strategies. ... The strategic convergence of AI, SaaS, cloud, and human capital is rewriting the rules of productivity, innovation, and global competitiveness. With forward-looking investments, grassroots upskilling efforts, and a vibrant startup culture, India is poised to define the future of work, not just for itself, but for the world.


Bridging the Gap Between Legacy Infrastructure and AI-Optimized Data Centers

Failure to modernize legacy infrastructure isn’t just a technical hurdle; it’s a strategic risk. Outdated systems increase operational costs, limit scalability, and create inefficiencies that hinder innovation. However, fully replacing existing infrastructure is rarely a practical or cost-effective solution. The path forward lies in a phased approach – modernizing legacy systems incrementally while introducing AI-optimized environments capable of meeting future demands. ... AI’s relentless demand for compute power requires a more diversified and resilient approach to energy sourcing. While small modular reactors (SMRs) present a promising future solution for scalable, reliable, and low-carbon power generation, they are not yet equipped to serve critical loads in the near term. Consequently, many operators are prioritizing behind-the-meter (BTM) generation, primarily gas-focused solutions, with the potential to implement combined cycle technologies that capture and repurpose steam for additional energy efficiency. ... The future of AI-optimized data centers lies in adaptation, not replacement. Substituting legacy infrastructure on a large scale is prohibitively expensive and disruptive. Instead, a hybrid approach – layering AI-optimized environments alongside existing systems while incrementally retrofitting older infrastructure – provides a more pragmatic path forward.


Why a Culture of Observability Is Key to Technology Success

A successful observability strategy requires fostering a culture of shared responsibility for observability across all teams. By embedding observability throughout the software development life cycle, organizations create a proactive environment where issues are detected and resolved early. This will require observability buy-in across all teams within the organization. ... Teams that prioritize observability gain deeper insights into system performance and user experiences, resulting in faster incident resolution and improved service delivery. Promoting an organizational mindset that values transparency and continuous monitoring is key. ... Shifting observability left into the development process helps teams catch issues earlier, reducing the cost of fixing bugs and enhancing product quality. Developers can integrate observability into code from the outset, ensuring systems are instrumented and monitored at every stage. This is a key step toward the establishment of a culture of observability. ... A big part is making sure that all the stakeholders across the organization, whether high or low in the org chart, understand what’s going on. This means taking feedback. Leadership needs to be involved. This means communicating what you are doing, why you are doing it and what the implications are of doing or not doing it.
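As a sketch of what "integrating observability into code from the outset" can look like, the hypothetical decorator below wraps any function with timing and structured logging. The names (`observed`, `fetch_user`) and the log format are illustrative assumptions, not a prescribed standard:

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("observability")

def observed(func):
    """Wrap a function with duration timing and structured success/error logs."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception:
            # Errors are logged with a stack trace, then re-raised.
            logger.exception("operation=%s status=error", func.__name__)
            raise
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logger.info("operation=%s duration_ms=%.2f", func.__name__, elapsed_ms)
    return wrapper

@observed
def fetch_user(user_id):
    # Placeholder for a real database or service call.
    return {"id": user_id, "name": "example"}
```

In practice, teams would emit these signals to a telemetry backend (e.g., via OpenTelemetry) rather than plain logs, but the shift-left idea is the same: the instrumentation ships with the code from day one.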


Why Agile Software Development is the Future of Engineering

Implementing iterative processes can significantly decrease time-to-market for projects. Statistics show that organizations using adaptive methodologies can increase their release frequency by up to 25%. This approach enables teams to respond promptly to market changes and customer feedback, leading to improved alignment with user expectations. Collaboration among cross-functional teams enhances productivity. In environments that prioritize teamwork, 85% of participants report higher engagement levels, which directly correlates with output quality. Structured daily check-ins allow for quick problem resolution, keeping projects on track and minimizing delays. Frequent iteration facilitates continuous testing and integration, which reduces errors early in the process. According to industry data, teams that deploy in short cycles experience up to 50% fewer defects compared to traditional methodologies. This not only expedites delivery but also enhances the overall reliability of the product. The focus on customer involvement significantly impacts product relevance. Engaging clients throughout the development process can lead to a 70% increase in user satisfaction, as adjustments are made in real time. Clients appreciate seeing their feedback implemented quickly, fostering a sense of ownership over the final product.


Why Risk Management Is Key to Sustainable Business Growth

Recent bank collapses demonstrate that a lack of effective risk management strategies can cause serious consequences for financial institutions, their customers, and the economy. A comprehensive risk management strategy is a tool to help banks protect assets and customers and guard against larger economic problems. ... Risk management is heavily driven by data analytics and identifying patterns in historical data. Predictive models and machine learning can forecast financial losses and detect risks and customer fraud. Additionally, banks can use predictive analytics for proactive decision-making. Data accuracy is of the utmost importance in this case because analysts use that information to make decisions about investments, customer loans, and more. Some banks rely on artificial intelligence (AI) to help detect customer defaults in more dynamic ways. For example, AI could be trained on cross-domain data to better understand customer behavior, or it could be used to make real-time decisions by incorporating real-time changes in market data. It also improves the customer experience by offering answers through highly trained chatbots, thereby increasing customer satisfaction and reducing reputation risk. Enterprises are training generative AI (GenAI) to be virtual regulatory and policy experts to answer questions about regulations, company policies, and guidelines.
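To make the predictive-analytics point concrete, here is a deliberately toy scoring sketch. The features, weights, and threshold are invented for illustration; a real bank would fit a model on historical data with a proper ML pipeline and validation:

```python
# Invented feature weights for a toy default-risk score.
WEIGHTS = {"debt_to_income": 2.5, "missed_payments": 1.8, "utilization": 1.2}
THRESHOLD = 2.0  # illustrative cut-off for manual review

def risk_score(applicant):
    """Weighted sum of the applicant's risk features (missing features count as 0)."""
    return sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)

def flag_for_review(applicant):
    """Flag applicants whose weighted risk score exceeds the threshold."""
    return risk_score(applicant) > THRESHOLD
```

The point is the workflow, not the arithmetic: historical patterns become a scoring function, and the score drives proactive decisions (e.g., routing an application to a human analyst) rather than after-the-fact loss accounting.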


How U.S. tariffs could impact cloud computing

Major cloud providers are commonly referred to as hyperscalers, and include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. They initially may absorb rising cloud costs to avoid risking market share by passing them on to customers. However, tariffs on hardware components such as servers and networking equipment will likely force them to reconsider their financial models, which means enterprises can expect eventual, if not immediate, price increases. ... As hyperscalers adapt to increased costs by exploring nearshoring or regional manufacturing, these shifts may permanently change cloud pricing dynamics. Enterprises that rely on public cloud services may need to plan for contract renegotiations and higher costs in the coming years, particularly as hardware supply chains remain volatile. The financial strain imposed by tariffs also has a ripple effect, indirectly affecting cloud adoption rates. ... Adaptability and agility remain essential for both providers and enterprises. For cloud vendors, resilience in the supply chain and efficiency in hardware will be critical. Meanwhile, enterprise leaders must balance cost containment with their broader strategic goals for digital growth. By implementing thoughtful planning and proactive strategies, organizations can navigate these challenges and continue to derive value from the cloud in the years ahead.


CIOs must mind their own data confidence gap

A lack of good data can lead to several problems, says Aidora’s Agarwal. C-level executives — even CIOs — may demand that new products be built when the data isn’t ready, leading to IT leaders who look incompetent because they repeatedly push back on timelines, or to those who pass the burden down to their employees. “The teams may get pushed on to build the next set of things that they may not be ready to build,” he says. “This can result in failed initiatives, significantly delayed delivery, or burned-out teams.” To fix this data quality confidence gap, companies should focus on being more transparent across their org charts, Palaniappan advises. Lower-level IT leaders can help CIOs and the C-suite understand their organization’s data readiness needs by creating detailed roadmaps for IT initiatives, including a timeline to fix data problems, he says. “Take a ‘crawl, walk, run’ approach to drive this in the right direction, and put out a roadmap,” he says. “Look at your data maturity in order to execute your roadmap, and then slowly improve upon it.” Companies need strong data foundations, including data strategies focused on business cases, data accessibility, and data security, adds Softserve’s Myronov. Organizations should also employ skeptics to point out potential data problems during AI and other data-driven projects, he suggests.


AI has grown beyond human knowledge, says Google's DeepMind unit

Not only is human judgment an impediment, but the short, clipped nature of prompt interactions never allows the AI model to advance beyond question and answer. "In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and the agent responds," the researchers write. "The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question." There's no memory, there's no continuity between snippets of interaction in prompting. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton. However, in their proposed Age of Experience, "Agents will inhabit streams of experience, rather than short snippets of interaction." Silver and Sutton draw an analogy between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task. ... The researchers suggest that the arrival of "thinking" or "reasoning" AI models, such as Gemini, DeepSeek's R1, and OpenAI's o1, may be surpassed by experience agents. The problem with reasoning agents is that they "imitate" human language when they produce verbose output about steps to an answer, and human thought can be limited by its embedded assumptions.


Understanding API Security: Insights from GoDaddy’s FTC Settlement

The FTC’s action against GoDaddy stemmed from the company’s inadequate security practices, which led to multiple data breaches from 2019 to 2022. These breaches exposed sensitive customer data, including usernames, passwords, and employee credentials. ... GoDaddy did not implement multi-factor authentication (MFA) and encryption, leaving customer data vulnerable. Without MFA and robust checks against credential stuffing, attackers could easily exploit stolen or weak credentials to access user accounts. Even with authentication, attackers can abuse authenticated sessions if the underlying API authorization is flawed. ... The absence of rate-limiting, logging, and anomaly detection allowed unauthorized access to 1.2 million customer records. More critically, this lack of deep inspection meant an inability to baseline normal API behavior and detect subtle reconnaissance or the exploitation of unique business logic flaws – attacks that often bypass traditional signature-based tools. ... Inadequate Access Controls: The exposure of admin credentials and encryption keys enabled attackers to compromise websites. Strong access controls are essential to restrict access to sensitive information to authorized personnel only. This highlights the risk not just of credential theft, but of authorization flaws within APIs themselves, where authenticated users gain access to data they shouldn’t.
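A minimal sketch of the missing rate-limiting control, using a per-client token bucket. The parameters are illustrative; production systems would typically enforce this at the API gateway rather than in application code:

```python
import time

class TokenBucket:
    """Per-client token bucket: each client may make `capacity` requests,
    with tokens refilled at `rate` per second."""

    def __init__(self, capacity=5, rate=1.0):
        self.capacity = capacity
        self.rate = rate
        self.state = {}  # client_id -> (tokens, last_timestamp)

    def allow(self, client_id, now=None):
        """Return True if the request is within budget, False if throttled."""
        now = time.monotonic() if now is None else now
        tokens, last = self.state.get(client_id, (self.capacity, now))
        # Refill based on elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.state[client_id] = (tokens - 1, now)
            return True
        self.state[client_id] = (tokens, now)
        return False
```

Each client starts with `capacity` tokens and regains `rate` tokens per second, so a credential-stuffing burst exhausts its budget within a few requests while normal traffic is unaffected; throttled attempts are also an obvious signal to log and alert on.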

Daily Tech Digest - April 18, 2025


Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis



How to Use Passive DNS To Trace Hackers Command And Control Infrastructure

This technology works through a network of sensors that monitor DNS query-response pairs, forwarding this information to central collection points for analysis without disrupting normal network operations. The resulting historical databases contain billions of unique records that security analysts can query to understand how domain names have resolved over time. ... When investigating potential threats, analysts can review months or even years of DNS resolution data without alerting adversaries to their investigation—a critical advantage when dealing with sophisticated threat actors. ... The true power of passive DNS in C2 investigation comes through various pivoting techniques that allow analysts to expand from a single indicator to map entire attack infrastructures. These techniques leverage the interconnected nature of DNS to reveal relationships between seemingly disparate domains and IP addresses. IP-based pivoting represents one of the most effective approaches. Starting with a known malicious IP address, analysts can query passive DNS to identify all domains that have historically resolved to that address. This technique often reveals additional malicious domains that share infrastructure but might otherwise appear unrelated.
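The IP-based pivot described above can be sketched over an in-memory record set. The domains, IPs, and record shape here are hypothetical stand-ins for a real passive DNS feed:

```python
from collections import namedtuple

PDNSRecord = namedtuple("PDNSRecord", "domain ip first_seen last_seen")

# Hypothetical passive DNS dataset; real data would come from a sensor
# network or a commercial passive DNS provider.
RECORDS = [
    PDNSRecord("update-check.example", "203.0.113.7", "2024-01-02", "2024-03-10"),
    PDNSRecord("cdn-sync.example", "203.0.113.7", "2024-02-11", "2024-04-01"),
    PDNSRecord("benign-site.example", "198.51.100.5", "2023-06-01", "2024-04-01"),
]

def pivot_on_ip(records, ip):
    """IP-based pivoting: return every domain that has historically resolved to `ip`."""
    return sorted({r.domain for r in records if r.ip == ip})
```

Starting from the known-bad IP `203.0.113.7`, the pivot surfaces both domains that shared that infrastructure, even though nothing else links them; an analyst would then repeat the pivot on each new domain to map out the wider C2 estate.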


Why digital identity is the cornerstone of trust in modern business

The foundation of digital trust is identity. It is no longer sufficient to treat identity management as a backend IT concern. Enterprises must now embed identity solutions into every digital touchpoint, ensuring that user interactions – whether by customers, employees, or partners – are both frictionless and secure. Modern enterprises must shift from fragmented, legacy systems to a unified identity platform. This evolution allows organisations to scale securely, eliminate redundancies and deliver the streamlined experiences users now expect. ... Digital identity is also a driver of customer experience. In today’s hyper-competitive digital landscape, the sign-up process can make or break a brand relationship. Clunky login screens or repeated verification prompts are quick ways to lose a customer.


Is your business ready for the IDP revolution?

AI-powered document processing offers significant advantages. Using advanced ML, IDP systems accurately interpret even complex and low-quality documents, including those with intricate tables and varying formats. This reduces manual work and the risk of human error. ... IDP also significantly improves data quality and accuracy by eliminating manual data entry, ensuring critical information is captured correctly and consistently. This leads to better decision-making, regulatory compliance and increased efficiency. IDP has wide-ranging applications. In healthcare, it speeds up claims processing and improves patient data management. In finance, it automates invoice processing and streamlines loan applications. In legal, it assists with contract analysis and due diligence. And in insurance, IDP automates information extraction from claims and reports, accelerating processing and boosting customer satisfaction. One specific example of this innovation in action is DocuWare’s own Intelligent Document Processing (DocuWare IDP). Our AI-powered solution streamlines how businesses handle even the most complex documents. Available as a standalone product, in the DocuWare Cloud or on-premises, DocuWare IDP automates text recognition, document classification and data extraction from various document types, including invoices, contracts and ID cards.


Practical Strategies to Overcome Cyber Security Compliance Standards Fatigue

The suitability of a cyber security framework must be determined based on applicable laws, industry standards, organizational risk profile, business goals, and resource constraints. It goes without saying that organizations providing critical services to the U.S. federal government will pursue NIST compliance, while small and medium-sized enterprises (SMEs) may want to focus on the CIS Top 20, given resource constraints. Once the cyber security team has selected the most suitable framework, they should seek endorsement from the executive team or cyber risk governance committee to ensure a shared sense of purpose. ... Mapping will enable organizations to identify overlapping controls to create a unified control set that addresses the requirements of multiple frameworks. This way, the organization can avoid redundant controls and processes, which in turn reduces cyber security team fatigue, accelerates innovation and lowers the cost of security. ... Cyber compliance standards play an integral role in ensuring organizations prioritize the protection of consumer confidential and sensitive information above profits. But to reduce pressure on cyber teams already battling stress, cyber leaders must take a pragmatic approach that carefully balances compliance with innovation, agility and efficiency.
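The mapping exercise can be illustrated with a toy unified control set. The control names and framework identifiers below are hypothetical examples, not an authoritative crosswalk:

```python
# Hypothetical mapping: each internal control lists the framework
# requirements it satisfies. Real mappings would come from published
# crosswalks between frameworks.
CONTROL_MAP = {
    "MFA for all admin accounts": {"NIST CSF": "PR.AC-7", "CIS": "Control 6"},
    "Quarterly access reviews": {"NIST CSF": "PR.AC-4", "CIS": "Control 6"},
    "Centralized audit logging": {"NIST CSF": "DE.AE-3", "CIS": "Control 8"},
    "Annual penetration test": {"NIST CSF": "ID.RA-1"},  # maps to one framework only
}

def unified_control_set(control_map, frameworks):
    """Return the controls that satisfy a requirement in every listed framework."""
    return sorted(
        control for control, reqs in control_map.items()
        if all(fw in reqs for fw in frameworks)
    )
```

Implementing the three controls that map into both frameworks satisfies requirements in each with a single set of processes, which is exactly the deduplication that reduces team fatigue.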


The Elaboration of a Modern TOGAF Architecture Maturity Model

This innovative TOGAF architecture maturity model provides a structured framework for assessing and enhancing an organization’s enterprise architecture capabilities in organizations that need to become more agile. By defining maturity levels across ten critical domains, the model enables organizations to transition from unstructured, reactive practices to well-governed, data-driven, and continuously optimized architectural processes. The five maturity levels—Initial, Under Development, Defined, Managed, and Measured—offer a clear roadmap for organizations to integrate EA into strategic decision-making, align business and IT investments, and establish governance frameworks that enhance operational efficiency. Through this approach, EA evolves from a support function into a key driver of innovation and business transformation. This model emphasizes continuous improvement and strategic alignment, ensuring that EA not only supports but actively contributes to an organization’s long-term success. By embedding EA into business strategy, security, governance, and solution delivery, enterprises can enhance agility, mitigate risks, and drive competitive advantage. Measuring EA’s impact through financial metrics and performance indicators further ensures that architecture initiatives provide tangible business value. 


Securing digital products under the Cyber Resilience Act

The CRA explicitly states that products should have an appropriate level of cybersecurity based on the risks; the risk-based approach is fundamental to the regulation. This has the advantage that we can set the bar wherever we want, as long as we make a good risk-based argument for that level. This implies that we must have a methodical categorization of risk, and hence we need application risk profiles. To implement this we can follow the quality criteria of maturity levels 1, 2, and 3 of the application risk profiles practice. This includes having a clearly agreed-upon, understood, accessible, and regularly updated risk classification system. ... Many companies already have SAMM assessments. If you do not, but use another maturity framework such as OWASP DSOMM or NIST CSF, you can use the available mappings to accelerate the translation to SAMM. Otherwise we recommend doing SAMM assessments, identifying the gaps in the processes needed, and then deciding on a roadmap to develop those processes and capabilities over time. ... Under the CRA we need to demonstrate that we have adequate security processes in place and that we do not ship products with known vulnerabilities. So apart from having a good picture of the data flows, we also need a good picture of the processes in place.
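As a purely illustrative sketch of a "clearly agreed upon" risk classification system, one could start from a tiny matrix over data sensitivity and exposure. The axes, labels, and scoring are assumptions for illustration, not CRA or SAMM requirements:

```python
def classify(data_sensitivity, internet_facing):
    """Toy application risk classification.

    data_sensitivity: 'low' | 'medium' | 'high'
    internet_facing:  True if the application is exposed to the internet.
    """
    order = {"low": 0, "medium": 1, "high": 2}
    score = order[data_sensitivity] + (1 if internet_facing else 0)
    return ["low", "medium", "high", "critical"][score]
```

Even a scheme this small gives teams a shared, repeatable way to argue "this product needs this level of cybersecurity", which is the risk-based reasoning the regulation asks for; real schemes would add more axes (safety impact, user base, update mechanism) agreed across the organization.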


Insider Threats, AI and Social Engineering: The Triad of Modern Cybersecurity Threats

Insiders who are targeted or influenced by external adversaries to commit data theft may not be caught by traditional security solutions, because attackers can combine manipulation techniques with other tactics to gain access to an organization's confidential data. This can be seen in the insider threats carried out by Famous Chollima, a cyber-criminal group that targeted organizations through employees who were secretly working for the group. The group recruited individuals, falsified their identities, and helped them secure employment with target organizations. Once inside, the group gained access to sensitive information through those employees. ... Since AI can mimic user behavior, it is hard for security teams to distinguish normal activity from AI-generated activity. Insiders can also use AI to assist in their plans: for example, an insider could train an AI model to analyze user activity and pinpoint the window of least activity, then deploy malware onto a critical system at that optimal time, disguising the action as legitimate to avoid detection by monitoring solutions.


How Successful Leaders Get More Done in Less Time

In order to be successful, leaders must make a conscious shift to move from reactive to intentional. They must guard their calendars, build in time for deep work, and set clear boundaries to focus on what truly drives progress. ... Time-blocking is one of the simplest, most powerful tools a leader can use. At its core, time-blocking is the practice of assigning specific blocks of time to different types of work: deep focus, meetings, admin, creative thinking or even rest. Why does it work? Because it eliminates context-switching, which is the silent killer of productivity. Instead of bouncing between tasks and losing momentum, time-blocking gives your day structure. It creates rhythm and ensures that what matters most actually gets done. ... Not everything on your to-do list matters. But without a clear system to prioritize, everything feels urgent. That's how leaders end up spending hours on reactive work while their most impactful tasks get pushed to "tomorrow." The fix? Use prioritization frameworks like the 80/20 rule (20% of tasks drive 80% of results) to stay focused on what actually moves the needle. ... If you're still doing everything yourself, there's a chance you're creating a bottleneck. The best leaders know that delegation buys back time and creates opportunities for others to grow. 


The tech backbone creating the future of infrastructure

Governments and administrators around the world are rapidly realizing the benefits of integrated infrastructure. A prime example is the growing trend for connecting utilities across borders to streamline operations and enhance efficiency. The Federal-State Modern Grid Deployment Initiative, involving 21 US states, is a major step towards modernizing the power grid, boosting reliability and enhancing resource management. Across the Atlantic, the EU is linking energy systems; by 2030, each member nation should be sharing at least 15% of its electricity production with its neighbors. On a smaller scale, the World Economic Forum is encouraging industrial clusters—including in China, Indonesia, Ohio and Australia—to share resources, infrastructure and risks to maximize economic and environmental value en route to net zero. ... Data is a nation’s most valuable asset. It is now being collected from multiple infrastructure points—traffic, energy grids, utilities. Infusing it with artificial intelligence (AI) in the cloud enables businesses to optimize their operations in real time. Centralizing this information, such as in an integrated command-and-control center, facilitates smoother collaboration and closer interaction among different sectors. 


No matter how advanced the technology is, it can all fall apart without strong security

One cybersecurity trend that truly excites me is the convergence of Artificial Intelligence (AI) with cybersecurity, especially in the areas of threat detection, incident response, and predictive risk management. This has motivated me to pursue a PhD in Cybersecurity using AI. Unlike traditional rule-based systems, AI is revolutionising cybersecurity by enabling proactive and adaptive defence strategies through contextual intelligence, shifting the focus from reactive to proactive measures. ... The real magic lies in combining AI with human judgement — what I often refer to as “human-in-the-loop cybersecurity.” This balance allows teams to scale faster, stay sharp, and focus on strategic defence instead of chasing every alert manually. What I have learnt from all this is that the fusion of AI and cybersecurity is not just an enhancement; it’s a paradigm shift. However, the key is achieving balance: AI should augment human intelligence rather than supplant it. ... In the realm of financial cybersecurity, the most significant risk isn’t solely technical; it stems from the gap between security measures and business objectives. As the CISO, my responsibility extends beyond merely protecting against threats; I aim to integrate cybersecurity into the core of the organisation, transforming it into a strategic enabler rather than a reactive measure.

Daily Tech Digest - April 17, 2025


Quote for the day:

"We are only as effective as our people's perception of us." -- Danny Cox



Why data literacy is essential - and elusive - for business leaders in the AI age

The rising importance of data-driven decision-making is clear but elusive. However, the trust in the data underpinning these decisions is falling. Business leaders do not feel equipped to find, analyze, and interpret the data they need in an increasingly competitive business environment. The added complexity is the convergence of macro and micro uncertainties -- including economic, political, financial, technological, competitive landscape, and talent shortage variables.  ... The business need for greater adoption of AI capabilities, including predictive, generative and agentic AI solutions, is increasing the need for businesses to have confidence and trust in their data. Survey results show that higher adoption of AI will require stronger data literacy and access to trustworthy data. ... The alarming part of the survey is that 54% of business leaders are not confident in their ability to find, analyze, and interpret data on their own. And fewer than half of business leaders are sure they can use data to drive action and decision-making, generate and deliver timely insights, or effectively use data in their day-to-day work. Data literacy and confidence in the data are two growth opportunities for business leaders across all lines of business.


Cyber threats against energy sector surge as global tensions mount

These cyber-espionage campaigns are primarily driven by geopolitical considerations, as tensions shaped by the Russo-Ukraine war, the Gaza conflict, and the U.S.’ “great power struggle” with China are projected into cyberspace. With hostilities rising, potentially edging toward a third world war, rival nations are attempting to demonstrate their cyber-military capabilities by penetrating Western and Western-allied critical infrastructure networks. Fortunately, these nation-state campaigns have overwhelmingly been limited to espionage, as opposed to Stuxnet-style attacks intended to cause harm in the physical realm. A secondary driver of increasing cyberattacks against energy targets is technological transformation, marked by cloud adoption, which has largely mediated the growing convergence of IT and OT networks. OT-IT convergence across critical infrastructure sectors has thus made networked industrial Internet of Things (IIoT) appliances and systems more penetrable to threat actors. Specifically, researchers have observed that adversaries are using compromised IT environments as staging points to move laterally into OT networks. Compromising OT can be particularly lucrative for ransomware actors, because this type of attack enables adversaries to physically paralyze energy production operations, empowering them with the leverage needed to command higher ransom sums. 


The Active Data Architecture Era Is Here, Dresner Says

“The buildout of an active data architecture approach to accessing, combining, and preparing data speaks to a degree of maturity and sophistication in leveraging data as a strategic asset,” Dresner Advisory Services writes in the report. “It is not surprising, then, that respondents who rate their BI initiatives as a success place a much higher relative importance on active data architecture concepts compared with those organizations that are less successful.” Data integration is a major component of an active data architecture, but there are different ways that users can implement data integration. According to Dresner, the majority of active data architecture practitioners are utilizing batch and bulk data integration tools, such as ETL/ELT offerings. Fewer organizations are utilizing data virtualization as the primary data integration method, or real-time event streaming (e.g., Apache Kafka) or message-based data movement (e.g., RabbitMQ). Data catalogs and metadata management are important aspects of an active data architecture. “The diverse, distributed, connected, and dynamic nature of active data architecture requires capabilities to collect, understand, and leverage metadata describing relevant data sources, models, metrics, governance rules, and more,” Dresner writes.


How can businesses solve the AI engineering talent gap?

“It is unclear whether nationalistic tendencies will encourage experts to remain in their home countries. Preferences may not only be impacted by compensation levels, but also by international attention to recent US treatment of immigrants and guests, as well as controversy at academic institutions,” says Bhattacharyya. But businesses can mitigate this global uncertainty, to some extent, by casting their hiring net wider to include remote working. Indeed, Thomas Mackenbrock, CEO-designate of Paris-headquartered BPO giant Teleperformance, says that the company’s global footprint helps it to fulfil AI skills demand. “We’re not reliant on any single market [for skills] as we are present in almost 100 markets,” explains Mackenbrock. ... “The future workforce will need to combine human ingenuity with new and emerging AI technologies; going beyond just technical skills alone,” says Khaled Benkrid, senior director of education and research at Arm. “Academic institutions play a pivotal role in shaping this future workforce. By collaborating with industry to conduct research and integrate AI into their curricula, they ensure that graduates possess the skills required by the industry. “Such collaborations with industry partners keep academic programs aligned with research frontiers and evolving job market demands, creating a seamless transition for students entering the workforce,” says Benkrid.


Breaking Down the Walls Between IT and OT

“Even though there's cyber on both sides, they are fundamentally different in concept,” Ian Bramson, vice president of global industrial cybersecurity at Black & Veatch, an engineering, procurement, consulting, and construction company, tells InformationWeek. “It's one of the things that have kept them more apart traditionally.” ... “OT is looked at as having a much longer lifespan, 30 to 50 years in some cases. An IT asset, the typical laptop these days that's issued to an individual in a company, three years is about when most organizations start to think about issuing a replacement,” says Chris Hallenbeck, CISO for the Americas at endpoint management company Tanium. ... The skillsets required of the teams to operate IT and OT systems are also quite different. On one side, you likely have people skilled in traditional systems engineering. They may have no idea how to manage the programmable logic controllers (PLC) commonly used in OT systems. The divide between IT and OT has been, in some ways, purposeful. The Purdue model, for example, provides a framework for segmenting ICS networks, keeping them separate from corporate networks and the internet. ... Cyberattack vectors on IT and OT environments look different and result in different consequences. “On the IT side, the impact is primarily data loss and all of the second order effects of your data getting stolen or your data getting held for ransom,” says Shankar.


Are Return on Equity and Value Creation New Metrics for CIOs?

While driving efficiency is not a new concept for technology leaders, what is different today is the scale and significance of their efforts. In many organizations, CIOs are being tasked with reimagining how value is generated, assessed and delivered. ... Traditionally, technology ROI discussions have focused on cost savings, automation, consolidation, and reduced headcount. But that perspective is shifting rapidly. CIOs are now prioritizing customer acquisition, retention, pricing power, and speed to market. CIOs also play a more integral role in product innovation than ever before. To remain relevant, they must speak the language of gross margin, not just uptime. This evolution is increasingly reflected in boardroom conversations. CIOs once presented dashboards of uptime and service-level agreements, but today, they discuss customer value, operational efficiency and platform monetization. ... In some cases, technology leaders scale too quickly before proving value. For example, expensive cloud migrations may proceed without a corresponding shift in the business model. This can result in data lakes with no clear application or platforms launched without product-market fit. These missteps can severely undermine ROE.


AI brings order to observability disorder

Artificial intelligence has contributed to complexity. Businesses now want to monitor large language models as well as applications to spot anomalies that may contribute to inaccuracies, bias, and slow performance. Legacy observability systems were never designed to bring together these disparate sources of data. A unified observability platform leveraging AI can radically simplify tools and processes, improving visibility and resolving problems faster, enabling the business to optimize operations based on reliable insights. By consolidating on one set of integrated observability solutions, organizations can lower costs, simplify complex processes, and enable better cross-function collaboration. “Noise overwhelms site reliability engineering teams,” says Gagan Singh, Vice President of Product Marketing at Elastic. Irrelevant and low-priority alerts can overwhelm engineers, leading them to overlook critical issues and delaying incident response. Machine learning models are ideally suited to categorizing anomalies and surfacing relevant alerts so engineers can focus on critical performance and availability issues. “We can now leverage GenAI to enable SREs to surface insights more effectively,” Singh says.
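As a minimal illustration of the anomaly filtering described above (not Elastic's actual implementation), even a simple z-score check can separate a genuine latency spike from routine noise before an alert reaches an engineer:

```python
import statistics

def is_anomalous(history, latest, z_threshold=3.0):
    """Return True if the latest reading deviates from the historical
    baseline by more than z_threshold standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero on a flat series
    return abs(latest - mean) / stdev > z_threshold

# Baseline of p99 latencies (ms): a 500 ms spike should surface as an alert,
# while a small fluctuation should be suppressed as noise.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
print(is_anomalous(baseline, 500))  # spike -> True
print(is_anomalous(baseline, 104))  # noise -> False
```

Production systems replace the z-score with learned models, but the principle is the same: score deviations against a baseline so only the critical few alerts surface.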


Why Most IaC Strategies Still Fail — And How To Fix Them

There are a few common reasons IaC strategies fail in practice. Let’s explore what they are, and dive into some practical, battle-tested fixes to help teams regain control, improve consistency and deliver on the original promise of IaC. ... Without a unified direction, fragmentation sets in. Teams often get locked into incompatible tooling — some using AWS CloudFormation for perceived enterprise alignment, others favoring Terraform for its flexibility. These tool silos quickly become barriers to collaboration. ... Resistance to change also plays a role. Some engineers may prefer to stick with familiar interfaces and manual operations, viewing IaC as an unnecessary complication. Meanwhile, other teams might be fully invested in reusable modules and automated pipelines, leading to fractured workflows and collaboration breakdowns. Successful IaC implementation requires building skills, bridging silos and addressing resistance with empathy and training — not just tooling. To close the gap, teams need clear onboarding plans, shared coding standards and champions who can guide others through real-world usage — not just theory. ... Drift is inevitable: manual changes, rushed fixes and one-off permissions often leave code and reality out of sync. Without visibility into those deviations, troubleshooting becomes guesswork. 
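The drift problem can be made concrete. The toy checker below (illustrative only; real tools such as Terraform's plan command do this against live provider APIs) compares declared state with actual state and reports resources that are missing, unmanaged, or changed:

```python
def detect_drift(declared: dict, actual: dict) -> dict:
    """Compare declared (IaC) state with actual (live) state.
    Reports resources that are missing, unmanaged, or changed."""
    drift = {"missing": [], "unmanaged": [], "changed": {}}
    for name, spec in declared.items():
        if name not in actual:
            drift["missing"].append(name)          # declared but not deployed
        elif actual[name] != spec:
            # Record each differing attribute as (declared, actual)
            drift["changed"][name] = {
                k: (spec.get(k), actual[name].get(k))
                for k in spec.keys() | actual[name].keys()
                if spec.get(k) != actual[name].get(k)
            }
    drift["unmanaged"] = [n for n in actual if n not in declared]  # manual one-offs
    return drift

# Hypothetical resources: a manual change added port 22, "db" was never
# deployed, and "cache" was created outside of IaC.
declared = {"web-sg": {"port": 443}, "db": {"size": "m5.large"}}
actual = {"web-sg": {"port": 443, "extra_port": 22}, "cache": {"size": "small"}}
report = detect_drift(declared, actual)
```

Running such a comparison continuously, rather than during outage triage, is what turns troubleshooting from guesswork back into inspection.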


What will the sustainable data center of the future look like?

The energy issue does not affect only operators and suppliers. If a customer uses a lot of energy, they will get a bill to match, says Van den Bosch. “I [as a supplier] have to provide the customer with all kinds of details about my infrastructure. That includes everything from air conditioning to the specific energy consumption of the server racks. The customer is then able to reduce that energy consumption.” This can be done, for example, by replacing servers earlier than before, a departure from the upgrade cycles of yesteryear. Ruud Mulder of Dell Technologies calls for the sustainability of equipment to be made measurable in great detail. This can be done by means of a digital passport, showing where all the materials come from and how recyclable they are. He thinks there is still much room for improvement in this area. For example, future designs can be recycled better by separating plastic and gold from each other, refurbishing components and more. This yield increase is often attractive, as more computing power is required for ambitious AI plans, and the efficiency of chips increases with each generation. “The transition to AI means that you sometimes have to say goodbye to your equipment sooner,” says Mulder. The AI issue is highly relevant to the future of the modern data center in any case.


Fitness Functions for Your Architecture

Fitness functions offer us self-defined guardrails for certain aspects of our architecture. If we stay within certain (self-chosen) ranges, we're safe (our architecture is "good"). ... Many projects already use some kinds of fitness functions, although they might not use the term. For example, metrics from static code checkers, linters, and verification tools (such as PMD, FindBugs/SpotBugs, ESLint, SonarQube, and many more). Collecting the metrics alone doesn't make it a fitness function, though. You'll need fast feedback for your developers, and you need to define clear measures: limits or ranges for tolerated violations and actions to take if a metric indicates a violation. In software architecture, we have certain architectural styles and patterns to structure our code in order to improve understandability, maintainability, replaceability, and so on. Maybe the most well-known pattern is a layered architecture with, quite often, a front-end layer above a back-end layer. To take advantage of such layering, we'll allow and disallow certain dependencies between the layers. Usually, dependencies are allowed from top to bottom, i.e., from the front end to the back end, but not the other way around. A fitness function for a layered architecture will analyze the code to find all dependencies between the front end and the back end.
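Here is a minimal sketch of such a fitness function, assuming a hypothetical source tree with top-level `frontend/` and `backend/` packages; it parses each module's imports and reports any back-end code that depends on the front end:

```python
import ast
from pathlib import Path

# Hypothetical layering rule: modules under backend/ may never import frontend code.
FORBIDDEN = {"backend": "frontend"}  # layer -> layer it must not depend on

def layer_violations(src_root: str) -> list[str]:
    """Fitness function for a layered architecture: return a list of
    disallowed bottom-to-top dependencies found in the source tree."""
    violations = []
    for py in Path(src_root).rglob("*.py"):
        layer = py.relative_to(src_root).parts[0]  # top-level package = layer
        banned = FORBIDDEN.get(layer)
        if banned is None:
            continue
        tree = ast.parse(py.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            violations += [f"{py}: imports {n}" for n in names
                           if n.split(".")[0] == banned]
    return violations
```

Wired into CI with a zero-tolerance limit, this gives developers the fast feedback the article calls for: the build fails the moment a dependency points the wrong way.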

Daily Tech Digest - April 16, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How to lead humans in the age of AI

Quiet the noise around AI and you will find the simple truth that the most crucial workplace capabilities remain deeply human. ... This human skills gap is even more urgent when Gen Z is factored in. They entered the workforce aligned with a shift to remote and hybrid environments, resulting in fewer opportunities to hone interpersonal skills through real-life interactions. This is not a critique of an entire generation, but rather an acknowledgment of a broad workplace challenge. And Gen Z is not alone in needing to strengthen communication across generational divides, but that is a topic for another day. ... Leaders must embrace their inner improviser. Yes, improvisation, like what you have watched on Whose Line Is It Anyway? Or the awkward performance your college roommate invited you to in that obscure college lounge. The skills of an improviser are a proven method for thriving amidst uncertainty. Decades of experience at Second City Works and studies published by The Behavioral Scientist confirm the principles of improv equip us to handle change with agility, empathy, and resilience. ... Make listening intentional and visible. Respond with the phrase, “So what I’m hearing is,” followed by paraphrasing what you heard. Pose thoughtful questions that indicate your priority is understanding, not just replying.


When companies merge, so do their cyber threats

Merging two companies means merging two security cultures. That is often harder than unifying tools or policies. While the technical side of post-M&A integration is important, it’s the human and procedural elements that often introduce the biggest risks. “When CloudSploit was acquired, one of the most underestimated challenges wasn’t technical, it was cultural,” said Josh Rosenthal, Holistic Customer Success Executive at REPlexus.com. “Connecting two companies securely is incredibly complex, even when the acquired company is much smaller.” Too often, the focus in M&A deals lands on surface-level assurances like SOC 2 certifications or recent penetration tests. While important, those are “table stakes,” Rosenthal noted. “They help, but they don’t address the real friction: mismatched security practices, vendor policies, and team behaviors. That’s where M&A cybersecurity risk really lives.” As AI accelerates the speed and scale of attacks, CISOs are under increasing pressure to ensure seamless integration. “Even a phishing attack targeting a vendor onboarding platform can introduce major vulnerabilities during the M&A process,” Rosenthal warned. To stay ahead of these risks, he said, smart security leaders need to dig deeper than documentation.


Measuring success in dataops, data governance, and data security

If you are on a data governance or security team, consider the metrics that CIOs, chief information security officers (CISOs), and chief data officers (CDOs) will consider when prioritizing investments and the types of initiatives to focus on. Amer Deeba, GVP of Proofpoint DSPM Group, says CIOs need to understand what percentage of their data is valuable or sensitive and quantify its importance to the business—whether it supports revenue, compliance, or innovation. “Metrics like time-to-insight, ROI from tools, cost savings from eliminating unused shadow data, or percentage of tools reducing data incidents are all good examples of metrics that tie back to clear value,” says Deeba. ... Dataops technical strategies include data pipelines to move data, data streaming for real-time data sources like IoT, and in-pipeline data quality automations. Using the reliability of water pipelines as an analogy is useful because no one wants pipeline blockages, leaky pipes, pressure drops, or dirty water from their plumbing systems. “The effectiveness of dataops can be measured by tracking the pipeline success-to-failure ratio and the time spent on data preparation,” says Sunil Kalra, practice head of data engineering at LatentView. “Comparing planned deployments with unplanned deployments needed to address issues can also provide insights into process efficiency.”
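Kalra's metrics are straightforward to compute once pipeline runs are recorded. A minimal sketch (the run fields and weighting here are hypothetical, for illustration):

```python
from dataclasses import dataclass

@dataclass
class PipelineRun:
    succeeded: bool
    planned: bool          # planned deployment vs. unplanned fix
    prep_minutes: float    # time spent on data preparation

def dataops_metrics(runs: list[PipelineRun]) -> dict:
    """Compute the pipeline success-to-failure ratio, the
    planned-to-unplanned deployment ratio, and average prep time."""
    successes = sum(r.succeeded for r in runs)
    failures = len(runs) - successes
    planned = sum(r.planned for r in runs)
    return {
        "success_to_failure": successes / max(failures, 1),
        "planned_to_unplanned": planned / max(len(runs) - planned, 1),
        "avg_prep_minutes": sum(r.prep_minutes for r in runs) / len(runs),
    }

runs = [PipelineRun(True, True, 30), PipelineRun(True, True, 25),
        PipelineRun(False, False, 60), PipelineRun(True, False, 40)]
metrics = dataops_metrics(runs)
```

A falling success-to-failure ratio or a rising share of unplanned deployments is exactly the kind of early signal of process inefficiency the article describes.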


How Safe Is the Code You Don’t Write? The Risks of Third-Party Software

Open-source and commercial packages and public libraries accelerate innovation, drive down development costs, and have become the invisible scaffolding of the Internet. GitHub recently highlighted that 99% of all software projects use third-party components. But with great reuse comes great risk. Third-party code is a double-edged sword. On the one hand, it’s indispensable. On the other hand, it’s a potential liability. In our race to deliver software faster, we’ve created sprawling software supply chains with thousands of dependencies, many of which receive little scrutiny after the initial deployment. These dependencies often pull in other dependencies, each one potentially introducing outdated, vulnerable, or even malicious code into environments that power business-critical operations. ... The risk is real, so what do we do? We can start by treating third-party code with the same caution and scrutiny we apply to everything else that enters the production pipeline. This includes maintaining a living inventory of all third-party components across every application and monitoring their status to prescreen updates and catch suspicious changes. With so many ways for threats to hide, we can’t take anything on trust, so next comes actively checking for outdated or vulnerable components as well as new vulnerabilities introduced by third-party code. 


The AI Leadership Crisis: Why Chief AI Officers Are Failing (And How To Fix It)

Perhaps the most dangerous challenge facing CAIOs is the profound disconnect between expectations and reality. Many boards anticipate immediate, transformative results from AI initiatives – the digital equivalent of demanding harvest without sowing. AI transformation isn't a sprint; it's a marathon with hurdles. Meaningful implementation requires persistent investment in data infrastructure, skills development, and organizational change management. Yet CAIOs often face arbitrary deadlines that are disconnected from these realities. One manufacturing company I worked with expected their newly appointed CAIO to deliver $50 million in AI-driven cost savings within 12 months. When those unrealistic targets weren't met, support for the role evaporated – despite significant progress in building foundational capabilities. ... There are many potential risks of AI, from bias to privacy concerns, and the right level of governance is essential. CAIOs are typically tasked with ensuring responsible AI use yet frequently lack the authority to enforce guidelines across departments. This accountability-without-authority dilemma places CAIOs in an impossible position. They're responsible for AI ethics and risk management, but departmental leaders can ignore their guidance with minimal consequences.


OT security: how AI is both a threat and a protector

Burying one’s head in the sand, a favorite pastime among some OT personnel, no longer works. Security through obscurity is and remains a bad idea. Heinemeyer: “I’m not saying that everyone will be hacked, but it is increasingly likely these days.” Possibly, the ostrich policy has to do with, yes, the reporting on OT vulnerabilities, including by yours truly. Ancient protocols, ICS systems and PLCs with exploitable vulnerabilities are evidently risk factors. However, the people responsible for maintaining these systems at manufacturing and utility facilities know better than anyone that actual exploitation of these obscure systems is improbable. ... Given the increasing threat, is the new focus on common best practices enough? We have already concluded that vulnerabilities should not be judged solely on the CVSS score. They are an indication, certainly, but a combination of CVEs with middle-of-the-range scoring appears to have the most serious consequences. Heinemeyer says that treating the identification of all vulnerabilities as the ultimate solution was well established from the 1990s to the 2010s. He says that in recent years, security professionals have realized that specific issues need to be prioritized, quantifying technical exploitability through various measurements (e.g., EPSS).


In a Social Engineering Showdown: AI Takes Red Teams to the Mat

In a revelation that shouldn’t surprise, but still should alarm security professionals, AI has gotten much more proficient in social engineering. Back in the day, AI was 31% less effective than human beings in creating simulated phishing campaigns. But now, new research from Hoxhunt suggests that the game-changing technology’s phishing performance against elite human red teams has improved by 55%. ... Using AI offensively can raise legal and regulatory hackles related to privacy laws and ethical standards, Soroko adds, as well as creating a dependency risk. “Over-reliance on AI could diminish human expertise and intuition within cybersecurity teams.” But that doesn’t mean bad actors will win the day or get the best of cyber defenders. Instead, security teams could and should turn the tables on them. “The same capabilities that make AI an effective phishing engine can — and must — be used to defend against it,” says Avist. With an emphasis on “must.” ... It seems that tried and true basics are a good place to start. “Ensuring transparency, accountability and responsible use of AI in offensive cybersecurity is crucial,” says Kowski. As with any aspect of tech and security, keeping AI models “up-to-date with the latest threat intelligence and attack techniques is also crucial,” he says. “Balancing AI capabilities with human expertise remains a key challenge.”


Optimizing CI/CD for Trust, Observability and Developer Well-Being

While speed is often cited as a key metric for CI/CD pipelines, the quality and actionability of the feedback provided are equally, if not more, important for developers. Jones, emphasizing the need for deep observability, stresses, “Don’t just tell me that the steps of the pipeline succeeded or failed, quantify that success or failure. Show me metrics on test coverage and show me trends and performance-related details. I want to see stack traces when things fail. I want to be able to trace key systems even if they aren’t related to code that I’ve changed because we have large complex architectures that involve a lot of interconnected capabilities that all need to work together.” This level of technical insight empowers developers to understand and resolve issues quickly, highlighting the importance of implementing comprehensive monitoring and logging within your CI/CD pipeline to provide developers with detailed insights into build, test, and deployment processes. Shifting feedback earlier in the development lifecycle serves everyone well; the key is ensuring it is contextual and arrives before code is merged. For example, running security scans at the pull request stage, rather than after deployment, ensures developers get actionable feedback while still in context.
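A sketch of what "quantified" feedback might look like, reducing raw stage results to a status, a coverage trend, and the first failing stack trace (the data shapes are invented for illustration, not any particular CI system's API):

```python
def pipeline_feedback(stage_results: list[dict], coverage_history: list[float]) -> dict:
    """Turn raw pipeline stage outcomes into actionable feedback:
    not just pass/fail, but a coverage trend and the first failure's trace."""
    failed = [s for s in stage_results if not s["passed"]]
    trend = (coverage_history[-1] - coverage_history[0]
             if len(coverage_history) > 1 else 0.0)
    return {
        "status": "failed" if failed else "passed",
        "coverage": coverage_history[-1],
        "coverage_trend": round(trend, 2),  # negative means coverage is eroding
        "first_failure_trace": failed[0].get("trace") if failed else None,
    }

stages = [{"name": "unit", "passed": True},
          {"name": "integration", "passed": False,
           "trace": "AssertionError at svc.py:42"}]
feedback = pipeline_feedback(stages, [81.0, 79.5, 78.2])
```

Surfacing this summary on the pull request itself, rather than in a post-deployment dashboard, is what keeps the feedback contextual.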


AI agents vs. agentic AI: What do enterprises want?

If AI and AI agents are application components, then they fit into both business processes and workflows. A business process is a flow, and these days at least part of that flow is the set of data exchanges among applications or their components—what we typically call a “workflow.” It’s common to think of the process of threading workflows through both applications and workers as a process separate from the applications themselves. Remember the “enterprise service bus”? That’s still what most enterprises prefer for business processes that involve AI. Get an AI agent that does something, give it the output of some prior step, and let it then create output for the step beyond it. The decision as to whether an AI agent is then “autonomous” is really made by whether its output goes to a human for review or is simply accepted and implemented. ... What enterprises like about their vision of an AI agent is that it’s possible to introduce AI into a business process without having AI take over the process or require the process be reshaped to accommodate AI. Tech adoption has long favored strategies that let you limit scope of impact, to control both cost and the level of disruption the technology creates. This favors having AI integrated with current applications, which is why enterprises have always thought of AI improvements to their business operation overall as being linked to incorporating AI into business analytics.
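The point that autonomy is really a routing decision can be sketched in a few lines: the same agent step either feeds the next workflow step directly or passes through a human review first (the agent and reviewer below are hypothetical stand-ins):

```python
from typing import Callable, Optional

def run_agent_step(agent: Callable[[dict], dict], payload: dict,
                   autonomous: bool,
                   review: Optional[Callable[[dict], dict]] = None) -> dict:
    """Run one AI agent as a workflow step. 'Autonomy' is just routing:
    pass the output onward directly, or through a human review step."""
    output = agent(payload)
    if autonomous or review is None:
        return output                # accepted and implemented as-is
    return review(output)            # human in the loop

# Hypothetical claims-classification agent and human reviewer.
classify = lambda claim: {**claim, "decision": "approve", "confidence": 0.72}
human = lambda r: ({**r, "decision": "escalate"} if r["confidence"] < 0.9 else r)

auto = run_agent_step(classify, {"claim_id": 1}, autonomous=True)
reviewed = run_agent_step(classify, {"claim_id": 2}, autonomous=False, review=human)
```

Because the agent is just another step on the bus, the surrounding process does not need to be reshaped; only the routing changes when trust in the agent grows.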


Liquid Cooling is ideal today, essential tomorrow, says HPE CTO

We’re moving from standard consumption levels—like 1 kilowatt per rack—to as high as 3 kilowatts or more. The challenge lies in provisioning that much power and doing it sustainably. Some estimates suggest that data centers, which currently account for about 1% of global power consumption, could rise to 5% if trends continue. This is why sustainability isn’t just a checkbox anymore—it’s a moral imperative. I often ask our customers: Who do you think the world belongs to? Most pause and reflect. My view is that we’re simply renting the world from our grandchildren. That thought should shape how we design infrastructure today. ... Air cooling works until a point. But as components become denser, with more transistors per chip, air struggles. You’d need to run fans faster and use more chilled air to dissipate heat, which is energy-intensive. Liquid, due to its higher thermal conductivity and density, absorbs and transfers heat much more efficiently. Some DLC systems use cold plates only on select components. Others use them across the board. There are hybrid solutions too, combining liquid and air. But full DLC systems, like ours, eliminate the need for fans altogether. ... Direct liquid cooling (DLC) is becoming essential as data centers support AI and HPC workloads that demand high performance and density. 

Daily Tech Digest - April 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy



Critical Thinking In The Age Of AI-Generated Code

Besides understanding our code, code reviewing AI-generated code is an invaluable skill nowadays. Tools like GitHub's Copilot and DeepCode can code-review better than a junior software developer. Depending on the complexity of the codebase, they can save us time in code reviewing and pinpoint cases that we may have missed, but, after all, they are not flawless. We still need to verify that the AI assistant's code review did not provide any false positives or false negatives. We need to verify that the code review did not miss anything important and that the AI assistant got the context correctly. The hybrid approach seems to be the most effective one: let AI handle the grunt work and rely on developers for the critical analysis. ... After all, code reviewing AI-generated code is an excellent opportunity to educate ourselves while improving our code-reviewing skills. Keep in mind that, to date, AI-generated code optimizes for patterns in its training data. This may not be aligned with coding first principles. AI-generated code may follow templated solutions rather than custom designs. It may include unnecessary defensive code or overly generic implementations. We need to check that it has chosen the most appropriate solution for each code block generated. Another common problem is that LLMs may hallucinate.


DeepCoder: Revolutionizing Software Development with Open-Source AI

One of the DeepCoder project’s most significant contributions is the introduction of verl-pipeline, an optimized extension of verl, the open-source RLHF library. The team identified sampling, the generation of long token sequences, as the primary bottleneck in training and developed “one-off pipelining” to address this challenge. This technique overlaps sampling, reward calculation and training, reducing end-to-end training times by up to 2.5x. This optimization is game-changing for coding tasks requiring thousands of unit tests per reinforcement learning iteration, making previously prohibitive training runs accessible to smaller research teams and independent developers. For DevOps professionals, DeepCoder represents an opportunity to integrate advanced code generation directly into CI/CD pipelines without dependency on API-gated services. Teams can fine-tune the model on their codebase, creating customized assistants that understand their specific architecture and coding patterns. ... DeepCoder’s open-source nature aligns with the DevOps collaboration and shared improvement philosophy. As more organizations adopt and contribute to the model, we can expect to see specialized versions emerge for different programming languages and problem domains.


Transforming Software Development

AI assistants are getting smarter, moving beyond prompt-based interactions to anticipate developers’ needs and proactively offer suggestions. This evolution is driven by the rise of AI agents, which can independently execute tasks, learn from their experiences and even collaborate with other agents. Next year, these agents will serve as a central hub for code assistance, streamlining the entire software development lifecycle. AI agents will autonomously write unit tests, refactor code for efficiency and even suggest architectural improvements. Developers’ roles will need to evolve alongside these advancements. AI will not replace them. Far from it; proactive AI assistants and their underlying agents will help developers build new skills and free up their time to focus on higher-value, more strategic tasks. ... AI models are more powerful when trained on internal company data, which allows them to generate insights specific to an organization’s unique operations and objectives. However, this often requires running models on premises for security and compliance reasons. With open source models rapidly closing the performance gap with commercial offerings, more businesses will deploy models on premises in 2025. This will allow organizations to fine-tune models with their own data and deploy AI applications at a fraction of the cost.


Cybercriminal groups embrace corporate structures to scale, sustain operations

We have seen cross collaboration between groups that specialize in specific activities. For example, one group specializes in social engineering, while another focuses on scaling malware and botnets to uncover open servers that yield database breaches. They, in turn, can sell access to those who focus on ransomware attacks. Recently, we have seen collaboration between AI/ML developers who scrape public records to build org charts, as well as lists of real estate holdings. This data is then used en masse with situational and location data to populate PDF attachments in emails that look like real invoices, with executives’ names in fake prior email responses, as part of the thread. ... the recent development in hackers organizing into larger groups has allowed the stakes to get even higher. Look at the Lazarus Group, who pulled off one of the largest heists ever by targeting Bybit and stealing $1.5 billion in Ethereum, as well as subsequently converting $300 million in unrecoverable funds. This group is likely state-sponsored and funding North Korean military programs. Therefore, understanding North Korean national interests will hint at future targets. The increasing scale of their attacks likely reflects greater resources allocated by North Korea, more sophisticated tooling and capabilities, lessons learned from previous operations, and a growing number of personnel trained in cyber operations.


Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?

Not everyone is bullish on the intersection of Web3, agentic AI and blockchain. Forrester Research vice president and principal analyst Martha Bennett is among those who are skeptical. In 2023, she co-authored an online post critical of Worldcoin, now the World project, and her opinion hasn’t changed in several regards. World project still faces major challenges, including privacy issues and concerns about its iris biometric technology, she said. And agentic AI is still in its early stages and not yet capable of supporting Web3 transactions. Most current generative AI (genAI) tools, including LLMs, lack the autonomy defined as “agentic AI.” “There’s no AI technology today that would be able to automate Web3 transactions in a reliable and secure manner,” she said. Given the risks and the potential for exploitation, it’s too soon to rely on AI systems with high autonomy for Web3 transactions. She did note, however, that Web3 already uses automation through smart contracts — self-executing electronic contracts with the terms of the agreement directly written into code. “Will Web3 go mainstream in 2025? My overall answer is no, but there are nuances,” she said. “If mainstream means mass consumer adoption, it’s a definite no. There’s simply not enough utility there for consumers.” Web3, Bennett said, is largely a self-contained financial ecosystem, and efforts to boost adoption through Decentralized Physical Infrastructure Networks (DePIN), such as Tools for Humanity’s, haven’t led to major breakthroughs.


Artificial Intelligence fuels rise of hard-to-detect bots 

“The surge in AI-driven bot creation has serious implications for businesses worldwide,” said Tim Chang, General Manager of Application Security at Thales. “As automated traffic accounts for more than half of all web activity, organisations face heightened risks from bad bots, which are becoming more prolific every day.” ... “This year’s report sheds light on the evolving tactics and techniques utilised by bot attackers. What were once deemed advanced evasion methods have now become standard practice for many malicious bots,” Chang said. “In this rapidly changing environment, businesses must evolve their strategies. It’s crucial to adopt an adaptive and proactive approach, leveraging sophisticated bot detection tools and comprehensive cybersecurity management solutions to build a resilient defense against the ever-shifting landscape of bot-related threats.” ... Analysis in the report reveals a deliberate strategy by cyber attackers to exploit API endpoints that manage sensitive and high-value data. Implications of this trend are especially impactful for industries that rely on APIs for their critical operations and transactions. Financial services, healthcare, and e-commerce sectors are bearing the brunt of these sophisticated bot attacks, making them prime targets for malicious actors seeking to breach sensitive information.


Humans at the helm of an AI-driven grid

A growing number of utilities are turning to AI-based tools to process vast data streams and streamline tasks once managed by manual calculation. For instance, algorithms can analyse weather patterns, historical consumption, and real-time sensor readings to make more accurate power demand and renewable energy generation forecasts. This supports more efficient balancing of supply and demand, reducing the likelihood of overloaded transformers or unexpected brownouts. Some utilities are also exploring AI-driven alarm management, which can filter the flood of alerts triggered by a network issue. Instead of operators sifting through hundreds of notifications, AI tools can be used to identify and highlight the most critical issues in real time. Another AI application is congestion management: detecting trouble spots on the grid where demand might exceed capacity and even proposing rerouting strategies to keep electricity flowing reliably. While still in their early stages, AI tools hold promise for driving operational efficiency in many daily scenarios. ... Even the smartest algorithm, however, lacks the broader perspective and accountability that people bring to grid management. Power and utility companies are tasked with a public service mandate: they must ensure safety, affordability, and equitable access to electricity.
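The alarm-management idea above, surfacing the few critical alerts out of hundreds, reduces at its simplest to ranking alerts by a learned or hand-tuned score. The sketch below uses hypothetical weights and asset names; a production system would derive them from historical operator responses rather than hard-code them.

```python
# Sketch of AI-assisted alarm management: rank a flood of grid alerts
# so operators see the most critical first. Weights and asset names
# are hypothetical placeholders, not utility practice.

def score_alert(alert):
    severity_weight = {"info": 0, "warning": 1, "critical": 3}
    score = severity_weight[alert["severity"]]
    score += alert["customers_affected"] / 1000   # scale of impact
    if alert["asset"] == "transformer":
        score += 2                                # overload risk
    return score

alerts = [
    {"id": 1, "severity": "info", "asset": "meter", "customers_affected": 5},
    {"id": 2, "severity": "critical", "asset": "transformer", "customers_affected": 4000},
    {"id": 3, "severity": "warning", "asset": "feeder", "customers_affected": 800},
]
top = sorted(alerts, key=score_alert, reverse=True)[:2]
print([a["id"] for a in top])  # -> [2, 3]
```

Even in this toy form, the "human at the helm" caveat applies: the ranking narrows attention, but the operator still decides what action the top alerts warrant.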


CISO Conversations: Maarten Van Horenbeeck, SVP & CSO at Adobe

The digital divide is simple to understand but complex to solve. Fundamentally, it separates those who have access to technology and cybersecurity knowledge from those who do not. There are areas of the world and socio-economic groups or demographics who have little or very limited access to the internet, and consequently very little awareness of cybersecurity. But cyber and cyber threats are worldwide; and technology is increasingly integrated and interconnected globally. “Cyber issues emanating from the digital divide don’t just play out far away from our homes – they play out very close to our homes as well,” warns Van Horenbeeck. “There’s a huge divide between people who know, for example, not to reuse passwords, to use multi factor authentication, and those individuals that have none of that experience at all.” In effect, the digital divide creates a largely invisible threat surface for the long-connected world. He believes that technology companies can play a part in solving this problem by making cybersecurity features easy to understand and use, and cites two examples of the Adobe approach. “We invested, for example, in support for passkeys because we feel it’s a more effective and easier method of authentication that is also more secure.”


How AI, Robotics and Automation Transform Supply Chains

Enterprises designing robots to augment the human workforce need to take design thinking and ergonomic approaches into consideration. Designers must think about how robots perceive and understand their physical surroundings without tripping over cables or objects on the floor, obstructing movement or causing human injuries. These robots are created with the aim of collaborating with humans on repetitive tasks and heavy lifting. Last year, OT.today featured stories on how humanoid robots augmented the human workforce at Amazon, Mercedes, NASA and the Piaggio Group. In 2017, Alibaba invested in AI labs and the DAMO Academy. At its flagship Computing Conference in 2018, held in Hangzhou, China, Alibaba showcased a range of robots designed for warehouses, autonomous deliveries and other sectors, including hospitality and pharmaceuticals. More recently, Alibaba invested in LimX Dynamics, a company specializing in humanoid and robotic technology. Japanese automobile manufacturers have been using industrial robots since the early 1980s. Chip manufacturing companies in Taiwan and other countries also use them. Robots assist in surgeries in the healthcare sector. But none of those early manufacturing robots resembled humanoids or had the advanced AI seen in today's robots.


CIOs are overspending on the cloud — but still think it’s worth it

CIOs should also embrace DevOps practices tied to cost reduction when consuming cloud resources, Sellers says. One pitfall that doesn’t get enough attention: Many organizations don’t educate developers on the cost of cloud services, despite the glut of developer services large cloud providers make trivial to call. “I’ve lost track of how many services Amazon provides that developers can just use, and some of those can be quite expensive, but a developer doesn’t really know that,” Sellers says. “They’re like, ‘Instead of writing my own solution to this, I can just call this service that Amazon already provides, and boom, my job is done.’” The disconnect between developers and financial factors in the cloud is a real problem that leads to increased cloud costs, adds Nick Durkin, field CTO at Harness, provider of an AI-driven software development platform. Without knowing the costs of accessing a cloud-based GPU or CPU, for example, a developer is like a home builder who doesn’t know the cost of wood or brick, Durkin says. “If you’re not giving your smartest engineers access to the information about services that they can optimize on, how would you expect them to do it?” he says. “Then, finance comes back a month later with a beating stick.”
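Durkin's home-builder analogy, developers calling managed services without ever seeing a price, suggests an obvious remedy: surface an estimated cost at the point of the call. The sketch below wraps service calls in a cost-logging decorator; the per-call dollar figures and service names are made-up placeholders, not real cloud pricing.

```python
# Sketch of developer-facing cost visibility: wrap service calls so
# each invocation logs an estimated price and a running total.
# Per-call costs and service names are hypothetical placeholders.

import functools

ESTIMATED_COST_USD = {
    "ocr_service": 0.0015,
    "gpu_inference": 0.0400,
}

total_spend = {"usd": 0.0}

def metered(service_name):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            cost = ESTIMATED_COST_USD[service_name]
            total_spend["usd"] += cost
            print(f"{service_name}: ~${cost:.4f} "
                  f"(running total ${total_spend['usd']:.4f})")
            return fn(*args, **kwargs)
        return inner
    return wrap

@metered("gpu_inference")
def classify(image):
    # Stand-in for a call to a managed inference service.
    return "cat"

for _ in range(3):
    classify("img.png")
# running total after three calls: ~$0.12
```

Even a rough estimate like this closes the feedback loop Durkin describes: the engineer sees the price of "just calling the service" before finance "comes back a month later with a beating stick."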