Daily Tech Digest - June 24, 2025


Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal


Why Agentic AI Is a Developer's New Ally, Not Adversary

Because agentic AI can complete complex workflows rather than simply generating content, it opens the door to a variety of AI-assisted use cases in software development that extend far beyond writing code — which, to date, has been the main way that software developers have leveraged AI. ... Agentic AI, by contrast, largely eliminates the need to spell out instructions or carry out manual actions. With just a sentence or two, developers can prompt AI to perform complex, multi-step tasks. It's important to note that, for the most part, agentic AI use cases like those described above remain theoretical. Agentic AI is a fairly new and quickly evolving field. The technology to do the sorts of things mentioned here theoretically exists, but existing tool sets for enabling specific agentic AI use cases are limited. ... It's also important to note that agentic AI poses new challenges for software developers. One is the risk that AI will make the wrong decisions. Like any LLM-based technology, AI agents can hallucinate, causing them to perform in undesirable ways. For this reason, it's tough to imagine entrusting high-stakes tasks to AI agents without requiring a human to supervise and validate them. Agentic AI also poses security risks. If agentic AI systems are compromised by threat actors, any tools or data that AI agents can access (such as source code) could also be exposed.


Modernizing Identity Security Beyond MFA

The next phase of identity security must focus on phishing-resistant authentication, seamless access, and decentralized identity management. The key principle guiding this transformation is phishing resistance by design. The adoption of FIDO2 and WebAuthn standards enables passwordless authentication using cryptographic key pairs. Because the private key never leaves the user’s device, attackers cannot intercept it. These methods eliminate the weakest link — human error — by ensuring that authentication remains secure even if users unknowingly interact with malicious links or phishing campaigns. ... By leveraging blockchain-based verified credentials — digitally signed, tamper-evident credentials issued by a trusted entity — wallets enable users to securely authenticate to multiple resources without exposing their personal data to third parties. These credentials can include identity proofs, such as government-issued IDs, employment verification, or certifications, which enable strong authentication. Using them for authentication reduces the risk of identity theft while improving privacy. Modern authentication must allow users to register once and reuse their credentials seamlessly across services. This concept reduces redundant onboarding processes and minimizes the need for multiple authentication methods.
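The phishing resistance the excerpt attributes to FIDO2/WebAuthn comes from binding each signed challenge to the site's origin. Below is a toy sketch of that flow; HMAC stands in for the authenticator's private-key signature purely to keep the sketch dependency-free (real WebAuthn uses asymmetric key pairs and the browser supplies the origin), and all names and origins are invented for illustration:

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Toy stand-in for a FIDO2 authenticator. A real device holds an
    asymmetric private key; HMAC is used here only to avoid external
    dependencies."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)  # never leaves the device

    def register(self):
        # Real WebAuthn would return a public key; sharing the HMAC key
        # so the server can verify is a simplification of this sketch.
        return self._secret

    def sign(self, challenge: bytes, origin: str) -> bytes:
        # The signature covers the origin. This is what makes the scheme
        # phishing-resistant: a look-alike domain yields a signature the
        # real site will reject.
        return hmac.new(self._secret, challenge + origin.encode(),
                        hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, origin: str,
                  signature: bytes) -> bool:
    expected = hmac.new(key, challenge + origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

device = Authenticator()
key = device.register()
challenge = secrets.token_bytes(16)  # fresh per login, prevents replay

# Legitimate login: the signature is bound to the real origin.
sig = device.sign(challenge, "https://bank.example")
assert server_verify(key, challenge, "https://bank.example", sig)

# Phishing site: the signature is bound to the attacker's origin,
# so verification at the real site fails.
phish_sig = device.sign(challenge, "https://bank-login.example")
assert not server_verify(key, challenge, "https://bank.example", phish_sig)
```

Note how nothing the user does (clicking a malicious link, typing at a look-alike site) produces a signature the legitimate origin will accept, which is the "secure even if users interact with phishing campaigns" property described above.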


The Pros and Cons of Becoming a Government CIO

Seeking a job as a government CIO offers a chance to make a real impact on the lives of citizens, says Aparna Achanta, security architect and leader at IBM Consulting -- Federal. CIOs typically lead a wide range of projects, such as upgrading systems in education, public safety, healthcare, and other areas that provide critical public services. "They [government CIOs] work on large-scale projects that benefit communities beyond profits, which can be very rewarding and impactful," Achanta observed in an online interview. "The job also gives you an opportunity for leadership growth and the chance to work with a wide range of departments and people." ... "Being a government CIO might mean dealing with slow processes and bureaucracy," Achanta says. "Most of the time, decisions take longer because they have to go through several layers of approval, which can delay projects." Government CIOs face unique challenges, including budget constraints, a constantly evolving mission, and increased scrutiny from government leaders and the public. "Public servants must be adept at change management in order to be able to pivot and implement the priorities of their administration to the best of their ability," Tamburrino says. Government CIOs are often frustrated by a hierarchy that runs at a far slower pace than their enterprise counterparts.


Why work-life balance in cybersecurity must start with executive support

Looking after your mental and physical health is critical. Setting boundaries helps the entire team, not just the cyber leader. One rule we have in my team is that we do not use work chat after business hours unless there are critical events. Everyone needs a break and sometimes hearing a text or chat notification can create undue stress. Another critical aspect of being a cybersecurity professional is holding to your integrity. People often do not like the fact that we have to monitor, report, and investigate systems and human behavior. When we get pushback for this in the form of unprofessional behavior or defensiveness, it can cause great personal stress. ... Executive leadership plays one of the most critical roles in supporting the CISO. Without executive level support, we would be crushed by the demands and the frequent conflicts of interest we experience. For example, project managers, CIOs, and other IT leadership roles might prioritize budget, cost, timelines, or other needs above security. A security professional prioritizes people (safety) and security above cost or timelines. The nature of our roles requires executive leadership support to balance the security and privacy risk (and what is acceptable to an executive). I think in several instances the executive board and CEOs understand this, but we are still a growing profession and there needs to be more education in this area.


Building Trust in Synthetic Media Through Responsible AI Governance

Relying solely on labeling tools faces multiple operational challenges. First, labeling tools often lack accuracy. This creates a paradox: inaccurate labels may legitimize harmful media, while unlabeled content may appear trustworthy. Moreover, users may not view basic AI edits, such as color correction, as manipulation, while opinions differ on changes like facial adjustments or filters. It remains unclear whether simple color changes require a label, or if labeling should only occur when media is substantively altered or generated using AI. Similarly, many synthetic media artifacts may not fit the standard definition of pornography, such as images showing white substances on a person’s face; however, they can often be humiliating. ... Second, synthetic media use cases exist on a spectrum, and the presence of mixed AI- and human-generated content adds complexity and uncertainty in moderation strategies. For example, when moderating human-generated media, social media platforms only need to identify and remove harmful material. In the case of synthetic media, it is often necessary to first determine whether the content is AI-generated and then assess its potential harm. This added complexity may lead platforms to adopt overly cautious approaches to avoid liability. These challenges can undermine the effectiveness of labeling.


How future-ready leadership can power business value

Leadership in 2025 requires more than expertise; it demands adaptability, compassion, and tech fluency. “Leadership today isn’t about having all the answers; it’s about creating an environment where teams can sense, interpret, and act with speed, autonomy, and purpose,” said Govind. As the learning journey of Conduent pivots from stabilization to growth, he shared that leaders need to do two key things in the current scenario: be human-centric and be digitally fluent. Similarly, Srilatha highlighted a fundamental shift happening among leaders: “Leaders today must lead with both compassion and courage while taking tough decisions with kindness.” She also underlined the rising importance of the three Rs in modern leadership: reskilling, resilience, and rethinking. ... Govind pointed to something deceptively simple: acting on feedback. “We didn’t just collect feedback, we analyzed sentiment, made changes, and closed the loop. That made stakeholders feel heard.” This approach led Conduent to experiment with program duration, where they went from 12 to 8 to 6 months. “Learning is a continuum, not a one-off event,” Govind added. ... Leadership development is no longer optional or one-size-fits-all. It’s a business imperative—designed around human needs and powered by digital fluency.


The CISO’s 5-step guide to securing AI operations

As AI applications extend to third parties, CISOs will need tailored audits of third-party data, AI security controls, supply chain security, and so on. Security leaders must also pay attention to emerging and often changing AI regulations. The EU AI Act is the most comprehensive to date, emphasizing safety, transparency, non-discrimination, and environmental friendliness. Others, such as the Colorado Artificial Intelligence Act (CAIA), may change rapidly as consumer reaction, enterprise experience, and legal case law evolve. CISOs should anticipate other state, federal, regional, and industry regulations. ... Established secure software development lifecycles should be amended to cover things such as AI threat modeling, data handling, API security, etc. ... End user training should include acceptable use, data handling, misinformation, and deepfake training. Human risk management (HRM) solutions from vendors such as Mimecast may be necessary to keep up with AI threats and customize training to different individuals and roles. ... Simultaneously, security leaders should schedule roadmap meetings with leading security technology partners. Come to these meetings prepared to discuss specific needs rather than sit through pie-in-the-sky PowerPoint presentations. CISOs should also ask vendors directly about how AI will be used for existing technology tuning and optimization.


State of Open Source Report Reveals Low Confidence in Big Data Management

"Many organizations know what data they are looking for and how they want to process it but lack the in-house expertise to manage the platform itself," said Matthew Weier O'Phinney, Principal Product Manager at Perforce OpenLogic. "This leads to some moving to commercial Big Data solutions, but those that can't afford that option may be forced to rely on less-experienced engineers, in which case issues with data privacy, inability to scale, and cost overruns could materialize." ... The EOL operating system CentOS Linux showed surprisingly high usage, with 40% of large enterprises still using it in production. While CentOS usage declined in Europe and North America in the past year, it is still the third most used Linux distribution overall (behind Ubuntu and Debian), and the top distribution in Asia. Among teams deploying EOL CentOS, 83% cited security and compliance as their biggest concern around their deployments. ... "Open source is the engine driving innovation in Big Data, AI, and beyond—but adoption alone isn't enough," said Gael Blondelle, Chief Membership Officer of the Eclipse Foundation. "To unlock its full potential, organizations need to invest in their people, establish the right processes, and actively contribute to the long-term sustainability and growth of the technologies they depend on."


Cybercrime goes corporate: A trillion-dollar industry undermining global security

The CaaS market is a booming economy in the shadows, driving annual revenues into billions. While precise figures are elusive due to its illicit nature, reports suggest it's a substantial and growing market. CaaS contributes significantly, and the broader cybersecurity services market is projected to reach hundreds of billions of dollars in the coming years. If measured as a country, cybercrime would already be the world's third-largest economy, with projected annual damages reaching USD 10.5 trillion by 2025, according to Cybersecurity Ventures. This growth is fueled by the same principles that drive legitimate businesses: specialisation, efficiency, and accessibility. CaaS platforms function much like dark online marketplaces. They offer pre-made hacking kits, phishing templates, and even access to already compromised computer networks. These services significantly lower the entry barrier for aspiring criminals. ... Enterprises must recognise that attackers often hit multiple systems simultaneously—computers, user identities, and cloud environments. This creates significant "noise" if security tools operate in isolation. Relying on many disparate security products makes it difficult to gain a holistic view and understand that seemingly separate incidents are often part of a single, coordinated attack.


Modern apps broke observability. Here’s how we fix it.

For developers, figuring out where things went wrong is difficult. In a survey looking at the biggest challenges to observability, 58% of developers said that identifying blind spots is a top concern. Stack traces may help, but they rarely provide enough context to diagnose issues quickly; developers chase down screenshots, reproduce problems, and piece together clues manually using the metric and log data from APM tools; a bug that could take 30 minutes to fix ends up consuming days or weeks. Meanwhile, telemetry data accumulates in massive volumes—expensive to store and hard to interpret. Without tools to turn data into insight, you’re left with three problems: high bills, burnout, and time wasted fixing bugs that neither affect core business functions nor drive revenue, even as increasing developer efficiency is a top strategic goal for organizations. ... More than anything, we need a cultural change. Observability must be built into products from the start. That means thinking early about how we’ll track adoption, usage, and outcomes—not just deliver features. Too often, teams ship functionality only to find no one is using it. Observability should show whether users ever saw the feature, where they dropped off, or what got in the way. That kind of visibility doesn’t come from backend logs alone.
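The product-level visibility described above (did users ever see the feature, and where did they drop off?) can be sketched as a simple event funnel. The event names and the counter-based store here are illustrative, not from any particular observability product:

```python
from collections import Counter

class FeatureFunnel:
    """Counts product events so a team can see whether users discovered
    a feature, used it, or dropped off along the way."""
    def __init__(self):
        self.events = Counter()

    def track(self, event: str):
        self.events[event] += 1

    def drop_off(self, seen: str, used: str) -> float:
        """Fraction of users who saw the feature but never used it."""
        shown = self.events[seen]
        if shown == 0:
            return 0.0
        return 1.0 - self.events[used] / shown

funnel = FeatureFunnel()
# Hypothetical instrumentation: 200 users saw the export button,
# only 30 clicked it.
for _ in range(200):
    funnel.track("export_button_seen")
for _ in range(30):
    funnel.track("export_button_clicked")

rate = funnel.drop_off("export_button_seen", "export_button_clicked")
print(f"drop-off: {rate:.0%}")  # drop-off: 85%
```

Numbers like this answer the "is anyone using it?" question directly, which backend logs alone cannot.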

Daily Tech Digest - June 23, 2025


Quote for the day:

"Sheep are always looking for a new shepherd when the terrain gets rocky." -- Karen Marie Moning


The 10 biggest issues IT faces today

“The AI explosion and how quickly it has come upon us is the top issue for me,” says Mark Sherwood, executive vice president and CIO of Wolters Kluwer, a global professional services and software firm. “In my experience, AI has changed and progressed faster than anything I’ve ever seen.” To keep up with that rapid evolution, Sherwood says he is focused on making innovation part of everyday work for his engineering team. ... “Modern digital platforms generate staggering volumes of telemetry, logs, and metrics across an increasingly complex and distributed architecture. Without intelligent systems, IT teams drown in alert fatigue or miss critical signals amid the noise,” he explains. “What was once a manageable rules-based monitoring challenge has evolved into a big data and machine learning problem.” He continues, saying, “This shift requires IT organizations to rethink how they ingest, manage, and act upon operational data. It’s not just about observability; it’s about interpretability and actionability at scale. ... CIOs today are also paying closer attention to geopolitical news and determining what it means for them, their IT departments, and their organizations. “These are uncertain times geopolitically, and CIOs are asking how that will affect IT portfolios and budgets and initiatives,” Squeo says.


Clouded judgement: Resilience, risk and the rise of repatriation

While the findings reflect growing concern, they also highlight a strategic shift, with 78% of leaders now considering digital sovereignty when selecting tech partners, and 68% saying they will only adopt AI services where they have full certainty over data ownership. For some, the answer is to take back control. Cloud repatriation is gaining some traction, at least in terms of mindset, but as yet, this is not translating into a mass exodus from the hyperscalers. And yet, calls for digital sovereignty are getting louder. In Europe, the Euro-Stack open letter has reignited the debate, urging policymakers to champion a competitive, sovereign digital infrastructure. But while politics might be a trigger, the key question is not whether businesses are abandoning cloud (most aren’t) but whether the balance of cloud usage is changing, driven as much by cost as performance needs and rising regulatory risks. ... “Despite access to cloud cost-optimisation teams, there was limited room to reduce expenses,” says Jonny Huxtable, CEO of LinkPool. After assessing bare-metal and colocation options, LinkPool decided to move fully to Pulsant’s colocation service. The company claims the move achieved a 90% to 95% cost reduction alongside major performance improvements and enhanced disaster recovery capabilities.


Cookie management under the Digital Personal Data Protection Act, 2023

Effective cookie management under the DPDP Act, as detailed in the BRDCMS, requires real-time updates to user preferences. Users must have access to a dedicated cookie preferences interface that allows them to modify or revoke their consent without undue complexity or delay. This interface should be easily accessible, typically through privacy settings or a dedicated cookie management dashboard. The real-time nature of these updates is crucial for maintaining compliance with the principles of consent as enshrined under the DPDP Act. When a user withdraws consent for specific cookie categories, the system must immediately cease the collection and processing of data through those cookies, ensuring that the user’s privacy preferences are respected without delay. Transparency is one of the fundamental pillars of the DPDP Act and extends to cookie usage disclosure. While the DPDP Act itself remains silent on specific cookie policies, the BRDCMS mandates the provision of a clear and accessible cookie policy. Organisations must provide clear and accessible cookie policies which outline the purposes of cookie usage, the data sharing practices and the implications of different consent choices. The cookie policy serves as a comprehensive resource enabling users to make informed decisions about their consent preferences.
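The withdraw-and-stop-immediately behavior described above can be sketched as a small consent store that is consulted at collection time, so a revoked category stops flowing at once. The category names and structure below are illustrative, not taken from the DPDP Act or the BRDCMS:

```python
from datetime import datetime, timezone

class CookieConsentManager:
    """Minimal sketch of per-category cookie consent with immediate
    effect on withdrawal. Categories are illustrative."""
    CATEGORIES = ("strictly_necessary", "analytics", "advertising")

    def __init__(self):
        # Strictly necessary cookies typically do not require consent;
        # everything else defaults to off until the user opts in.
        self.consent = {c: (c == "strictly_necessary") for c in self.CATEGORIES}
        self.audit_log = []

    def set_consent(self, category: str, granted: bool):
        if category == "strictly_necessary":
            raise ValueError("strictly necessary cookies cannot be disabled")
        self.consent[category] = granted
        # Timestamped record of each preference change, for accountability.
        self.audit_log.append((category, granted, datetime.now(timezone.utc)))

    def may_process(self, category: str) -> bool:
        # Checked at collection time, so withdrawal takes effect
        # immediately: no data flows through a revoked category.
        return self.consent.get(category, False)

mgr = CookieConsentManager()
mgr.set_consent("analytics", True)
assert mgr.may_process("analytics")

mgr.set_consent("analytics", False)      # user withdraws consent
assert not mgr.may_process("analytics")  # processing stops at once
```

The key design choice is that collection code asks `may_process` on every use rather than caching the answer, which is what makes the update genuinely real-time.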


AI agents win over professionals - but only to do their grunt work, Stanford study finds

According to the report, the majority of workers are ready to embrace agents for the automation of low-stakes and repetitive tasks, "even after reflecting on potential job loss concerns and work enjoyment." Respondents said they hoped to focus on more engaging and important tasks, mirroring what's become something of a marketing mantra among big tech companies pushing AI agents: that these systems will free workers and businesses from drudgery, so they can focus on more meaningful work. The authors also noted "critical mismatches" between the tasks that AI agents are being deployed to handle -- such as software development and business analysis -- and the tasks that workers are actually looking to automate. ... The study could have big implications for the future of human-AI collaboration in the workplace. Using a metric that they call the Human Agency Scale (HAS), the authors found "that workers generally prefer higher levels of human agency than what experts deem technologically necessary." ... The report further showed that the rise of AI automation is causing a shift in the human skills that are most valued in the workplace: information-processing and analysis skills, the authors said, are becoming less valuable as machines become increasingly competent in these domains, while interpersonal skills -- including "assisting and caring for others" -- are more important than ever.


New OLTP: Postgres With Separate Compute and Storage

The traditional methods for integrating databases are complex and not suited to AI, Xin said. The challenge lies in integrating analytics and AI with transactional workloads. Consider what developers would do when adding a feature to a code base, Xin said in his keynote address at the Data + AI Summit. They’d create a new branch of the codebase and make changes to the new branch. They’d use that branch to check bugs, perform testing and so on. Xin said creating a new branch is an instant operation. What’s the equivalent for databases? Your only option is to clone your production database, which might take days. How do you set up secure networking? How do you create ETL pipelines and log data from one to another? ... Streaming is now a first-class citizen in the enterprise, Mohan told me. The separation of compute and storage makes a difference. We are approaching an era when applications will scale infinitely, both in terms of the number of instances and their scale-out capabilities. And that leads us to new questions about how we start to think about evaluation, observability and semantics. Accuracy matters. ... ADP may have the world’s best payroll data, Mohan said, but then that data has to be processed through ETL into an analytics solution like Databricks. Then comes the analytics and the data science work. The customer has to perform a significant amount of data engineering work and preparation.


Can AI Save Us from AI? The High-Stakes Race in Cybersecurity

Reluctant executives and budget hawks can shoulder some of the responsibility for slow AI adoption, but they’re hardly the only barriers. Increasingly, employees are voicing legitimate concerns about surveillance, privacy and the long-term impact of automation on job security. At the same time, enterprises may face structural issues when it comes to integration: fragmented systems, a lack of data inventory and access controls, and other legacy architectures can also hinder the secure integration and scalability of AI-driven security solutions. Meanwhile, bad actors face none of these considerations. They have immediate, unfettered access to open-source AI tools, which can enhance the speed and force of an attack. They operate without AI tool guardrails, governance, oversight or ethical constraints. ... Insider threat detection is also maturing. AI models can detect suspicious behavior, such as unusual access to data, privilege changes or timing inconsistencies, that may indicate a compromised account or insider threat. Early adopters, such as financial institutions, are using behavioral AI to flag synthetic identities by spotting subtle deviations that traditional tools often miss. They can also monitor behavioral intent signals, such as a worker researching resignation policies before initiating mass file downloads, providing early warnings of potential data exfiltration.


The complexities of satellite compute

“In cellular communications on the ground, this was solved a few decades ago. But doing it in space, you have to have the computing horsepower to do those handoffs as well as the throughput capability.” This additional compute needs to be in "a radiation tolerant form, and in such a way that they don't consume too much power and generate too much heat to cause massive thermal problems on the satellites." In LEO, satellites face a barrage of radiation. "It's an environment that's very rich in protons," O'Neill says. "And protons can cause upsets in configuration registers, they can even cause latch-ups in certain integrated circuits." The need to be more radiation tolerant has also pushed the industry towards newer hardware as, the smaller the process node, the lower the operating voltage. "Reducing operating voltage makes you less susceptible to destructive effects," O'Neill explains. One issue, a single event latch up, sees the satellite conduct a lot of current from power to ground through the integrated circuit, potentially frying it. ... Modern integrated circuits are a lot less susceptible to these single-event latch-ups, but are not completely immune. "While the core of the circuit may be operating at a very low voltage, 0.7 or 0.8 volts, you still have I/O circuits in the integrated circuit that may be required to interoperate with other ICs at 3.3 volts or 2.5 volts," O'Neill adds.


How CISOs can justify security investments in financial terms

A common challenge we see is the absence of a formal ERM program, or the fragmentation of risk functions, where enterprise, cybersecurity, and third-party risks are evaluated using different impact criteria. This lack of alignment makes it difficult for CISOs to communicate effectively with the C-suite and board. Standardizing risk programs and using consistent impact criteria enables clearer risk comparisons, shared understanding, and more strategic decision-making. This challenge is further exacerbated by the rise of AI-specific regulations and frameworks, including the NIST AI Risk Management Framework, the EU AI Act, the NYC Bias Audit Law, and the Colorado Artificial Intelligence Act. ... Communicating security investments in clear, business-aligned risk terms—such as High, Medium, or Low—using agreed-upon impact criteria like financial exposure, operational disruption, reputational harm, and customer impact makes it significantly easier to justify spending and align with enterprise priorities. ... In our Virtual CISO engagements, we’ve found that a risk-based, outcome-driven approach is highly effective with executive leadership. We frame cyber risk tolerance in financial and operational terms, quantify the business value of proposed investments, and tie security initiatives directly to strategic objectives. 
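The excerpt's call for expressing security investments in financial terms can be made concrete with annualized loss expectancy (ALE), a standard quantification: expected incidents per year times expected loss per incident, mapped onto the agreed High/Medium/Low bands. The figures and band thresholds below are purely hypothetical:

```python
def annualized_loss_expectancy(annual_rate: float, loss_per_event: float) -> float:
    """Classic ALE: expected events per year times expected loss per event."""
    return annual_rate * loss_per_event

def risk_band(ale: float) -> str:
    # Thresholds are illustrative; a real program agrees these
    # impact criteria with the board up front.
    if ale >= 1_000_000:
        return "High"
    if ale >= 100_000:
        return "Medium"
    return "Low"

# Hypothetical scenario: ransomware expected once every 4 years,
# with an estimated $2M loss per incident.
ale = annualized_loss_expectancy(0.25, 2_000_000)
print(ale, risk_band(ale))  # 500000.0 Medium

# A control that halves the likelihood is worth ~$250k/year in
# reduced expected loss, a figure a CFO can weigh against its cost.
residual = annualized_loss_expectancy(0.125, 2_000_000)
print(ale - residual)  # 250000.0
```

Framing a proposed control as "reduces expected annual loss by $250k for a cost of $X" is the kind of business-aligned justification the excerpt describes.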


From fear to fluency: Why empathy is the missing ingredient in AI rollouts

In the past, teams had time to adapt to new technologies. Operating systems or enterprise resource planning (ERP) tools evolved over years, giving users more room to learn these platforms and acquire the skills to use them. Unlike previous tech shifts, this one with AI doesn’t come with a long runway. Change arrives overnight, and expectations follow just as fast. Many employees feel like they’re being asked to keep pace with systems they haven’t had time to learn, let alone trust. A recent example would be ChatGPT reaching 100 million monthly active users just two months after launch. ... This underlines the emotional and behavioral complexity of adoption. Some people are naturally curious and quick to experiment with new technology while others are skeptical, risk-averse or anxious about job security. ... Adopting AI is not just a technical initiative, it’s a cultural reset, one that challenges leaders to show up with more empathy and not just expertise. Success depends on how well leaders can inspire trust and empathy across their organizations. The 4 E’s of adoption offer more than a framework. They reflect a leadership mindset rooted in inclusion, clarity and care. By embedding empathy into structure and using metrics to illuminate progress rather than pressure outcomes, teams become more adaptable and resilient.


Why networks need AIOps and predictive analytics

Predictive Analytics – a key capability of AIOps – forecasts future network performance and problems, enabling early intervention and proactive maintenance. Further, early prediction of bottlenecks or additional requirements helps to optimise the management of network resources. For example, when organisations have advance warning about traffic surges, they can allocate capacity to prevent congestion and outages, and enhance overall network performance. A range of mundane tasks, from incident response to work order generation to network configuration to proactive IT health checks and maintenance scheduling, can be automated with AIOps to reduce the load on IT staff and free them up to concentrate on more strategic activities. ... When traditional monitoring tools were unable to identify bottlenecks in a healthcare provider’s network that was seeing a slowdown in its electronic health records (EHR) system during busy hours, a switch to AIOps resolved the problem. By enabling observability across domains, the system highlighted that performance dipped when users logged in during shift changes. It also predicted slowdowns half an hour in advance and automatically provisioned additional resources to handle the surge in activity. The result was a 70 percent reduction in the most critical EHR slowdowns, improvement in system responsiveness, and freeing up of IT human resources.
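The EHR example above (forecast a surge, provision ahead of it) can be reduced to a minimal sketch. Real AIOps platforms train ML models over cross-domain telemetry; a naive moving-average forecast with a headroom threshold is used here only to show the shape of the idea, and all load numbers are invented:

```python
from statistics import mean

def forecast_next(window):
    """Naive moving-average forecast of the next sample."""
    return mean(window)

def plan_capacity(samples, capacity, headroom=0.8, window=3):
    """Flag any point where forecast load crosses the headroom
    threshold, so extra resources can be provisioned before the
    surge actually lands."""
    alerts = []
    for i in range(window, len(samples)):
        predicted = forecast_next(samples[i - window:i])
        if predicted > capacity * headroom:
            alerts.append((i, predicted))
    return alerts

# Hypothetical EHR login load (requests/sec) ramping up at a shift change.
load = [40, 45, 50, 70, 90, 110, 120]
alerts = plan_capacity(load, capacity=100)
print(alerts)  # [(6, 90)]
```

The alert fires one step before load exceeds the 80% headroom line, which is the "predicted slowdowns in advance and provisioned resources" pattern from the case study, just at toy scale.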

Daily Tech Digest - June 21, 2025


Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins


AI in Disaster Recovery: Mapping Technical Capabilities to Real Business Value

Despite its promise, AI introduces new challenges, including security risks and trust deficits. Threat actors leverage the same AI advancements, targeting systems with more precision and, in some cases, undermining AI-driven defenses. In the Zerto–IDC survey mentioned earlier, for instance, only 41% of respondents felt that AI is “very” or “somewhat” trustworthy; 59% felt that it is “not very” or “not at all” trustworthy. To mitigate these risks, organizations must adopt AI responsibly. For example, combining AI-driven monitoring with robust encryption and frequent model validation ensures that AI systems deliver consistent and secure performance. Furthermore, organizations should emphasize transparency in AI operations to maintain trust among stakeholders. Successful AI deployment in DR/CR requires cross-functional alignment between ITOps and management. Misaligned priorities can delay response times during crises, exacerbating data loss and downtime. Additionally, the IT skills shortage persists, with a different recent IDC study predicting that 9 out of 10 organizations will feel an impact by 2026, at a cost of $5.5 trillion in potential delays, quality issues, and revenue loss across the economy. Integrating AI-driven automation can partially mitigate these impacts by optimizing resource allocation and reducing dependency on manual intervention.


The Quantum Supply Chain Risk: How Quantum Computing Will Disrupt Global Commerce

Whether it's APIs, middleware, firmware, embedded devices, or operational technology, they're all built on the same outdated encryption and systems of trust. One of the biggest threats from quantum computing will be to all this unseen machinery that powers global digital trade. These systems handle the backend of everything from routing cargo to scheduling deliveries and clearing large shipments, but they were never designed to withstand the threat of quantum. Attackers will be able to break in quietly — injecting malicious code into control software, ERP systems or impersonating suppliers to communicate malicious information and hijack digital workflows. Quantum computing won't necessarily affect the industries on its own, but it will corrupt the systems that power the global economy. ... Some of the most dangerous attacks are being staged today, with many nation-states and bad actors storing encrypted data, from procurement orders to shipping records. When quantum computers are finally able to break those encryption schemes, attackers will be able to decrypt them in what's known as a Harvest Now, Decrypt Later (HNDL) attack. These attacks, although retroactive in nature, represent one of the biggest threats to the integrity of cross-border commerce. Global trade depends on digital provenance: handling goods and proving where they came from.


Securing OT Systems: The Limits of the Air Gap Approach

Aside from susceptibility to advanced techniques, tactics, and procedures (TTPs) such as thermal manipulation and magnetic fields, more common vulnerabilities associated with air-gapped environments include factors such as unpatched systems going unnoticed, lack of visibility into network traffic, potentially malicious devices coming on the network undetected, and removable media being physically connected within the network. Once an attack is inside OT systems, the consequences can be disastrous regardless of whether there is an air gap or not. However, it is worth considering how the existence of the air gap can affect the time-to-triage and remediation in the case of an incident. ... This incident reveals that even if a sensitive OT system has complete digital isolation, this robust air gap still cannot fully eliminate one of the greatest vulnerabilities of any system—human error. Human error would still be a factor even if an organization went to the extreme of building a Faraday cage to eliminate electromagnetic radiation. Air-gapped systems are still vulnerable to social engineering, which exploits human vulnerabilities, as seen in the tactics that Dragonfly and Energetic Bear used to trick suppliers, who then walked the infection right through the front door. Ideally, a technology would be able to identify an attack regardless of whether it is caused by a compromised supplier, radio signal, or electromagnetic emission.


How to Lock Down the No-Code Supply Chain Attack Surface

A core feature of no-code development, third-party connectors allow applications to interact with cloud services, databases, and enterprise software. While these integrations boost efficiency, they also create new entry points for adversaries. ... Another emerging threat involves dependency confusion attacks, where adversaries exploit naming collisions between internal and public software packages. By publishing malicious packages to public repositories with the same names as internally used components, attackers could trick the platform into downloading and executing unauthorized code during automated workflow executions. This technique allows adversaries to silently insert malicious payloads into enterprise automation pipelines, often bypassing traditional security reviews. ... One of the most challenging elements of securing no-code environments is visibility. Security teams struggle with asset discovery and dependency tracking, particularly in environments where business users can create applications independently without IT oversight. Applications and automations built outside of IT governance may use unapproved connectors and expose sensitive data, since they often integrate with critical business workflows. 
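The naming-collision mechanic behind dependency confusion can be shown with a short sketch: compare the package names an automation platform resolves internally against a snapshot of a public registry, and any overlap is a name an attacker could shadow. The package names and the registry snapshot below are hypothetical examples, not real components.

```python
def find_name_collisions(internal_packages, public_index):
    """Return internal package names that also exist on a public registry.

    Any name in both places is a dependency confusion candidate: a resolver
    preferring the public index could pull an attacker-published package.
    """
    return sorted(set(internal_packages) & set(public_index))


# Hypothetical internal components and an imagined public-index snapshot.
internal = ["acme-billing-core", "requests", "acme-workflow-utils"]
public = {"requests", "numpy", "acme-workflow-utils"}

print(find_name_collisions(internal, public))
# → ['acme-workflow-utils', 'requests']
```

In practice the mitigation for each flagged name is to pin it to the private registry (or use a reserved namespace) so automated workflow executions can never fall through to the public copy.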


Securing Your AI Model Supply Chain

Supply-chain Levels for Software Artifacts (SLSA) is a comprehensive framework designed to protect the integrity of software artifacts, including AI models. SLSA provides a set of standards and practices to secure the software supply chain from source to deployment. By implementing SLSA, organizations can ensure that their AI models are built and maintained with the highest levels of security, reducing the risk of tampering and ensuring the authenticity of their outputs. ... Sigstore is an open-source project that aims to improve the security and integrity of software supply chains by providing a transparent and secure way to sign and verify software artifacts. Using cryptographic signatures, Sigstore ensures that AI models and other software components are authentic and have not been tampered with. This system allows developers and organizations to trace the provenance of their AI models, ensuring that they originate from trusted sources. ... The most valuable takeaway for ensuring model authenticity is the implementation of robust verification mechanisms. By utilizing frameworks like SLSA and tools like Sigstore, organizations can create a transparent and secure supply chain that guarantees the integrity of their AI models. This approach helps build trust with stakeholders and ensures that the models deployed in production are reliable and free from malicious alterations.
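The core verification idea these tools build on can be sketched in a few lines: record a cryptographic digest of the model artifact at build time, then check it before deployment. This is a minimal illustration of digest pinning only; Sigstore's real signing flow additionally involves keyless certificates and a transparency log, and the "model weights" below are a stand-in byte string.

```python
import hashlib


def sha256_digest(data: bytes) -> str:
    """Hex digest uniquely identifying an artifact's exact contents."""
    return hashlib.sha256(data).hexdigest()


def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """Check an artifact against the digest recorded in build provenance."""
    return sha256_digest(data) == expected_digest


model_bytes = b"pretend these are serialized model weights"
pinned = sha256_digest(model_bytes)  # recorded at build time

print(verify_artifact(model_bytes, pinned))                 # True
print(verify_artifact(model_bytes + b" tampered", pinned))  # False
```

Even this bare form catches silent tampering between build and deploy; signing the digest (rather than just storing it) is what adds an authenticated link back to the trusted publisher.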


Data center retrofit strategies for AI workloads

AI accelerators are highly sensitive to power quality. Sub-cycle power fluctuations can cause bit errors, data corruption, or system instability. Older uninterruptible power supply (UPS) systems may struggle to handle the dynamic loads AI can produce, often involving sub-cycle swings of three MW or more. Updating the electrical distribution system (EDS) is an opportunity to replace dated UPS technology that cannot handle the dynamic AI load profile, redesign power distribution for redundancy, and ensure that power supply configurations meet the demands of high-density computing. ... With the high cost of AI downtime, risk mitigation becomes paramount. Energy and power management systems (EPMS) are capable of high-resolution waveform capture, which allows operators to trace and address electrical anomalies quickly. These systems are essential for identifying the root cause of power quality issues and coordinating fast response mechanisms. ... No two mission-critical facilities are the same regarding space, power, and cooling. Add the variables of each AI deployment, and what works for one facility may not be the best fit for another. That said, there are some universal truths about retrofitting for AI. You will need engineers who are well-versed in various equipment configurations, including cooling and electrical systems connected to the network. 
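A toy example of what high-resolution waveform capture makes possible: compute per-cycle RMS voltage from raw samples and flag cycles that deviate from nominal. The synthetic waveform, the 230 V nominal, and the 10% tolerance below are invented for illustration; a real EPMS works on far denser sample streams and vendor-specific thresholds.

```python
import math


def cycle_rms(samples, samples_per_cycle):
    """Per-cycle RMS values for a sampled voltage waveform."""
    out = []
    for i in range(0, len(samples) - samples_per_cycle + 1, samples_per_cycle):
        window = samples[i:i + samples_per_cycle]
        out.append(math.sqrt(sum(v * v for v in window) / len(window)))
    return out


def flag_sags(rms_values, nominal, tolerance=0.1):
    """Indices of cycles whose RMS deviates more than `tolerance` from nominal."""
    return [i for i, r in enumerate(rms_values)
            if abs(r - nominal) / nominal > tolerance]


# Three cycles of a 230 V RMS sine wave, with a voltage sag in cycle 1.
amp = 230 * math.sqrt(2)
wave = []
for cycle in range(3):
    scale = 0.6 if cycle == 1 else 1.0
    for i in range(64):
        wave.append(scale * amp * math.sin(2 * math.pi * i / 64))

rms = cycle_rms(wave, samples_per_cycle=64)
print(flag_sags(rms, nominal=230))  # → [1]
```

Flagging at the single-cycle level is the point: the sub-cycle swings described above come and go faster than conventional per-second metering can even see them.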


Is it time for a 'cloud reset'? New study claims public and private cloud balance is now a major consideration for companies across the world

Enterprises often still have some kind of cloud-first policy, he outlined, but they have realized they need some form of private cloud too, typically because some workloads do not meet their needs in the public cloud, mainly around cost, complexity and compliance. The problem is that because public cloud has taken priority, infrastructure has not grown in the right way. Increasingly, Broadcom's conversations are with customers realizing they need to focus on both public and private cloud, and some on-prem, Baguley says, as they're realizing, “we need to make sure we do it right, we're doing it in a cost-effective way, and we do it in a way that's actually going to be strategically sensible for us going forward.” "In essence - they've realised they need to build something on-prem that can not only compete with public cloud, but actually be better in various categories, including cost, compliance and complexity.” ... To help with these concerns, Broadcom has released VMware Cloud Foundation (VCF) 9.0, the latest edition of its platform to help customers get the most out of private cloud. Described by Baguley as “the culmination of 25 years of work at VMware”, VCF 9.0 offers users a single platform with one SKU, giving them improved visibility while supporting all applications with a consistent experience across the private cloud environment.


Cloud in the age of AI: Six things to consider

This is an issue impacting many multinational organizations, driving the growth of regional and even industry-specific clouds, which offer tailored compliance, security, and performance options. As organizations try to architect infrastructure that supports their future states, with a blend of cloud and on-prem, data sovereignty is an increasingly large issue. I hear a lot from IT leaders about how they must consider local and regional regulations, which adds a consideration to the simple concept of migration to the cloud. ... Sustainability was always the hidden cost of connected computing. Hosting data in the cloud consumes a lot of energy. Financial cost is most top of mind when IT leaders talk about driving efficiency through the cloud right now. It’s also at the root of a lot of talk about moving to the edge and using AI-infused end user devices. But expect sustainability to become an increasingly important factor in cloud: geopolitical instability, the cost of energy, and the increasing demands of AI will see to that. ... The AI PC pitch from hardware vendors is that organizations will be able to build small ‘clouds’ of end user devices. Specific functions and roles will work on AI PCs and do their computing at the edge. The argument is compelling: better security and efficient modular scalability. Not every user or function needs all capabilities and access to all data.


Creating a Communications Framework for Platform Engineering

When platform teams focus exclusively on technical excellence while neglecting a communication strategy, they create an invisible barrier between the platform’s capability and its business impact. Users can’t adopt what they don’t understand, and leadership won’t invest in what they can’t measure. ... To overcome engineers’ skepticism of new tools that may introduce complexity, your communication should clearly articulate how the platform simplifies their work. Highlight its ability to reduce cognitive load, minimize context switching, enhance access to documentation and accelerate development cycles. Present these advantages as concrete improvements to daily workflows, rather than abstract concepts. ... Tap into the influence of respected technical colleagues who have contributed to the platform’s development or were early adopters. Their endorsements are more impactful than any official messaging. Facilitate opportunities for these champions to demonstrate the platform’s capabilities through lightning talks, recorded demos or pair programming sessions. These peer-to-peer interactions allow potential users to observe practical applications firsthand and ask candid questions in a low-pressure environment.


Why data sovereignty is an enabler of Europe’s digital future

Data sovereignty has far-reaching implications, with potential impact on many areas of a business beyond the IT department. One of the most obvious examples is for the legal and finance departments, where GDPR and similar legislation require granular control over how data is stored and handled. The harsh reality is that any gaps in compliance could result in legal action, substantial fines and subsequent damage to longer term reputation. Alongside this, providing clarity on data governance increasingly factors into trust and competitive advantage, with customers and partners keen to eliminate grey areas around data sovereignty. ... One way that many companies are seeking to gain more control and visibility of their data is by repatriating specific data sets from public cloud environments over to on-premise storage or private clouds. This is not about reversing cloud technology; instead, repatriation is a sound way of achieving compliance with local legislation and ensuring there is no scope for questions over exactly where data resides. In some instances, repatriating data can improve performance, reduce cloud costs and it can also provide assurance that data is protected from foreign government access. Additionally, on-premise or private cloud setups can offer the highest levels of security from third-party risks for the most sensitive or proprietary data.

Daily Tech Digest - June 20, 2025


Quote for the day:

"Everything you’ve ever wanted is on the other side of fear." -- George Addair



Encryption Backdoors: The Security Practitioners’ View

On the one hand, “What if such access could deliver the means to stop crime, aid public safety and stop child exploitation?” But on the other hand, “The idea of someone being able to look into all private conversations, all the data connected to an individual, feels exposing and vulnerable in unimaginable ways.” As a security practitioner he has both moral and practical concerns. “Even if lawful access isn’t the same as mass surveillance, it would be difficult to distinguish between ‘good’ and ‘bad’ users without analyzing them all.” Morally, it is a reversal of the presumption of innocence and means no-one can have any guaranteed privacy. Professionally he says, “Once the encryption can be broken, once there is a backdoor allowing someone to access data, trust in that vendor will lessen due to the threat to security and privacy introducing another attack vector into the equation.” It is this latter point that is the focus for most security practitioners. “From a practitioner’s standpoint,” says Rob T Lee, chief of research at SANS Institute and founder at Harbingers, “we’ve seen time and again that once a vulnerability exists, it doesn’t stay in the hands of the ‘good guys’ for long. It becomes a target. And once it’s exploited, the damage isn’t theoretical. It affects real people, real businesses, and critical infrastructure.”


Visa CISO Subra Kumaraswamy on Never Allowing Cyber Complacency

Kumaraswamy is always thinking about talent and technology in cybersecurity. Talent is a perennial concern in the industry, and Visa is looking to grow its own. The Visa Payments Learning Program, launched in 2023, aims to help close the skills gap in cyber through training and certification. “We are offering this to all of the employees. We’re offering it to our partners, like the banks, our customers,” says Kumaraswamy. Right now, Visa leverages approximately 115 different technologies in cyber, and Kumaraswamy is constantly evaluating where to go next. “How do I [get to] the 116th, 117th, 118th?” he asks. ”That needs to be added because every layer counts.” Of course, GenAI is a part of that equation. Thus far, Kumaraswamy and his team are exploring more than 80 different GenAI initiatives within cyber. “We’ve already taken about three to four of those initiatives … to the entire company. That includes what we call a ‘shift left’ process within Visa. It is now enabled with agentic AI. It’s reducing the time to find bugs in the code. It is also helping reduce the time to investigate incidents,” he shares. Visa is also taking its best practices in cybersecurity and sharing them with its customers. “We can think of this as value-added services to the mid-size banks, the credit unions, who don’t have the scale of Visa,” says Kumaraswamy.


Agentic AI in automotive retail: Creating always-on sales teams

To function effectively, digital agents need memory. This is where memory modules come into play. These components store key facts about ongoing interactions, such as the customer’s vehicle preferences, budget, and previous questions. For instance, if a returning visitor had previously shown interest in SUVs under a specific price range, the memory module allows the AI to recall that detail. Instead of restarting the conversation, the agent can pick up where it left off, offering an experience that feels personalised and informed. Memory modules are critical for maintaining consistency across long or repeated interactions. Without them, agentic AI would struggle to replicate the attentive service provided by a human salesperson who remembers returning customers. ... Despite the intelligence of agentic AI, there are scenarios where human involvement is still needed. Whether due to complex financing questions or emotional decision-making, some buyers prefer speaking to a person before finalizing their decision. A well-designed agentic system should recognize when it has reached the limits of its capabilities. In such moments, it should facilitate a handover to a human representative. This includes summarizing the conversation so far, alerting the sales team in real-time, and scheduling a follow-up if required.
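The role a memory module plays can be sketched as a small keyed store: facts learned during a conversation are written under a customer identifier and read back on the next visit to seed the agent's context. The class, field names, and customer data below are illustrative inventions, not a production design.

```python
class MemoryModule:
    """Toy per-customer memory store for a sales agent (illustrative only)."""

    def __init__(self):
        self._store = {}  # customer_id -> {fact_key: fact_value}

    def remember(self, customer_id, key, value):
        """Record a fact learned during an interaction."""
        self._store.setdefault(customer_id, {})[key] = value

    def recall(self, customer_id):
        """Return everything known about a customer (empty dict if new)."""
        return self._store.get(customer_id, {})


memory = MemoryModule()
memory.remember("cust-42", "body_style", "SUV")
memory.remember("cust-42", "max_budget", 35000)

# On a return visit, the agent seeds its context instead of restarting:
print(memory.recall("cust-42"))
# → {'body_style': 'SUV', 'max_budget': 35000}
```

Real systems typically back this with a database or vector store and add expiry and consent handling, but the contract is the same: write facts as they surface, read them back before the next turn.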


Multicloud explained: Why it pays to diversify your cloud strategy

If your cloud provider were to suffer a massive and prolonged outage, that would have major repercussions on your business. While that’s pretty unlikely if you go with one of the hyperscalers, it’s possible with a more specialized vendor. And even with the big players, you may discover annoyances, performance problems, unanticipated charges, or other issues that might cause you to rethink your relationship. Using services from multiple vendors makes it easier to end a relationship that feels like it’s gone stale without you having to retool your entire infrastructure. It can be a great means to determine which cloud providers are best for which workloads. And it can’t hurt as a negotiating tactic when contracts expire or when you’re considering adding new cloud services. ... If you add more cloud resources by adding services from a different vendor, you’ll need to put in extra effort to get the two clouds to play nicely together, a process that can range from “annoying” to “impossible.” Even after bridging the divide, there’s administrative overhead involved—it’ll be harder to keep tabs on data protection and privacy, for instance, and you’ll need to track cloud usage and the associated costs for multiple vendors. Network bandwidth: Many vendors make it cheap and easy to move data to and within their cloud, but might make you pay a premium to export it. 


Decentralized Architecture Needs More Than Autonomy

Decentralized architecture isn’t just a matter of system design - it’s a question of how decisions get made, by whom, and under what conditions. In theory, decentralization empowers teams. In practice, it often exposes a hidden weakness: decision-making doesn’t scale easily. We started to feel the cracks as our teams expanded quickly and our organizational landscape became more complex. As teams multiplied, architectural alignment started to suffer - not because people didn’t care, but because they didn’t know how or when to engage in architectural decision-making. ... The shift from control to trust requires more than mindset - it needs practice. We leaned into a lightweight but powerful toolset to make decentralized decision-making work in real teams. Chief among them is the Architectural Decision Record (ADR). ADRs are often misunderstood as documentation artifacts. But in practice, they are confidence-building tools. They bring visibility to architectural thinking, reinforce accountability, and help teams make informed, trusted decisions - without relying on central authority. ... Decentralized architecture works best when decisions don’t happen in isolation. Even with good individual practices - like ADRs and advice-seeking - teams still need shared spaces to build trust and context across the organization. That’s where Architecture Advice Forums come in.


4 new studies about agentic AI from the MIT Initiative on the Digital Economy

In their study, Aral and Ju found that human-AI pairs excelled at some tasks and underperformed human-human pairs on others. Humans paired with AI were better at creating text but worse at creating images, though campaigns from both groups performed equally well when deployed in real ads on social media site X. Looking beyond performance, the researchers found that the actual process of how people worked changed when they were paired with AI. Communication (as measured by messages sent between partners) increased for human-AI pairs, with less time spent on editing text and more time spent on generating text and visuals. Human-AI pairs sent far fewer social messages, such as those typically intended to build rapport. “The human-AI teams focused more on the task at hand and, understandably, spent less time socializing, talking about emotions, and so on,” Ju said. “You don’t have to do that with agents, which leads directly to performance and productivity improvements.” As a final part of the study, the researchers varied the assigned personality of the AI agents using the Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The AI personality pairing experiments revealed that programming AI personalities to complement human personalities greatly enhanced collaboration. 


DevOps Backup: Top Reasons for DevOps and Management

Depending on the industry, you may need to comply with different security protocols, acts, certifications, and standards. If your company operates in a highly regulated industry, like healthcare, technology, financial services, pharmaceuticals, manufacturing, or energy, those security and compliance regulations and protocols can be even stricter. Thus, to meet these stringent compliance and security requirements, your organization needs to implement security measures like role-based access controls, encryption, ransomware protection, defined RTOs and RPOs, risk-assessment plans, and other compliance best practices. And, of course, a backup and disaster recovery plan is one of them, too. It ensures that the company will be able to restore its critical data fast, guaranteeing the availability, accessibility, security, and confidentiality of your data. ... Another issue that is closely related to compliance is data retention. Some compliance regulations require organizations to keep their data for a long time. As an example, we can mention NIST’s requirements from its Security and Privacy Controls for Information Systems and Organizations: “… Storing audit records on separate systems or components applies to initial generation as well as backup or long-term storage of audit records…”
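An RPO check, one of the objectives mentioned above, reduces to a simple rule a monitoring script can enforce: if the most recent backup is older than the allowed window, the recovery point objective is breached. The four-hour objective and the timestamps below are invented for illustration.

```python
from datetime import datetime, timedelta


def rpo_breached(last_backup: datetime, now: datetime, rpo_hours: int) -> bool:
    """True if the newest backup is older than the allowed RPO window,
    meaning a failure right now would lose more data than policy permits."""
    return (now - last_backup) > timedelta(hours=rpo_hours)


now = datetime(2025, 6, 24, 12, 0)

# Last backup six hours ago against a four-hour RPO: breached.
print(rpo_breached(datetime(2025, 6, 24, 6, 0), now, rpo_hours=4))   # True

# Last backup two hours ago: within the objective.
print(rpo_breached(datetime(2025, 6, 24, 10, 0), now, rpo_hours=4))  # False
```

The same pattern extends to retention: compare the age of the oldest retained copy against the mandated retention period instead of the newest copy against the RPO.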


How AI can save us from our 'infinite' workdays, according to Microsoft

Activity is not the same as progress. What good is work if it's just busy work and not tackling the right tasks or goals? Here, Microsoft advises adopting the Pareto Principle, which postulates that 20% of the work should deliver 80% of the outcomes. And how does this involve AI? Use AI agents to handle low-value tasks, such as status meetings, routine reports, and administrative churn. That frees up employees to focus on deeper tasks that require the human touch. For this, Microsoft suggested watching the leadership keynote from the Microsoft 365 Community Conference on Building the Future Firm. ... Instead of using an org chart to delineate roles and responsibilities, turn to a work chart. A work chart is driven more by outcome, in which teams are organized around a specific goal. Here, you can use AI to fill in some of the gaps, again freeing up employees for more in-depth work. ... Finally, Microsoft pointed to a new breed of professionals known as agent bosses. They handle the infinite workday not by putting in more hours but by working smarter. One example cited in the report is Alex Farach, a researcher at Microsoft. Instead of getting swamped in manual work, Farach uses a trio of AI agents to act as his assistants. One collects daily research. The second runs statistical analysis. And the third drafts briefs to tie all the data together.
 

Data Governance and AI Governance: Where Do They Intersect?

AIG and DG share common responsibilities in guiding data as a product that AI systems create and consume, despite their differences. Both governance programs evaluate data integration, quality, security, privacy, and accessibility. For instance, both governance frameworks need to ensure quality information meets business needs. If a major retailer discovered their AI-powered product recommendation engine was suggesting irrelevant items to customers, then DG and AIG would both want the issue resolved. However, either approach, or a combination, could be best for solving the problem. Determining the right governance response requires analyzing the root issue. ... DG and AIG provide different approaches; which works best depends on the problem. Take the example, above, of irrelevant recommendations served to a customer. The data governance team audits the product data pipeline and finds inconsistent data standards and missing attributes feeding into the AI model. However, the AI governance team also identifies opportunities to enhance the recommendation algorithm’s logic for weighting customer preferences. The retailer could resolve the data quality issues through DG while improving the AI model’s mechanics through AIG, a collaborative approach that draws on both data governance and AI governance perspectives. 


Deepfake Rebellion: When Employees Become Targets

Surviving and mitigating such an attack requires moving beyond purely technological solutions. While AI detection tools can help, the first and most critical line of defense lies in empowering the human factor. A resilient organization builds its bulwarks on human risk management and security awareness training, specifically tailored to counter the mental manipulation inherent in deepfake attacks. Rapidly deploy trained ambassadors. These are not IT security personnel, but respected peers from diverse departments trained to lead workshops. ... Leadership must address employees first, acknowledge the incident, express understanding of the distress caused, and unequivocally state that the deepfake is under investigation. Silence breeds speculation and distrust. There should be channels for employees to voice concerns, ask questions, and access support without fear of retribution. This helps to mitigate panic and rebuild a sense of community. Ensure a unified public response, coordinating Comms, Legal, and HR. ... The antidote to synthetic mistrust is authentic trust, built through consistent leadership, transparent communication, and a demonstrable commitment to shared values. The goal is to create an environment where verification habits are second nature. It’s about discerning malicious fabrication from human error or disagreement.

Daily Tech Digest - June 19, 2025


Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis


Introduction to Cloud Native Computing

In cloud native systems, security requires a different approach compared to traditional architectures. In a distributed system, the old “castle and moat” model of creating a secure perimeter around vital systems, applications, APIs and data is not feasible. In a cloud native architecture, the “castles” are distributed across various environments — public and private cloud, on-prem — and they may pop up and disappear in seconds. ... DevSecOps integrates security practices within the DevOps process, ensuring that security is a shared responsibility and is considered at every stage of the software development life cycle. Implementing DevSecOps in a cloud native context helps organizations maintain robust security postures while capitalizing on the agility and speed of cloud native development. ... Cloud native applications often operate in dynamic environments that are subject to rapid changes. By adopting the following strategies and practices, cloud native applications can effectively scale in response to user demands and environmental changes, ensuring high performance and user satisfaction. ... By strategically adopting hybrid and multicloud approaches and effectively managing their complexities, organizations can significantly enhance their agility, resilience, and operational efficiency in the cloud native landscape. While hybrid and multicloud strategies offer benefits, they also introduce complexity in management. 


How a New CIO Can Fix the Mess Left by Their Predecessor

The new CIO should listen to IT teams, business stakeholders, and end-users to uncover pain points and achieve quick wins that will build credibility, says Antony Marceles, founder of Pumex, a software development and technology integration company, in an online interview. Whether to rebuild or repair depends on the architecture's integrity. "Sometimes, patching legacy systems only delays the inevitable, but in other cases smart triage can buy time for a thoughtful transformation." ... Support can often come from unconventional corners, such as high-performing team leads, finance partners, or external advisors, all of whom may have experienced their own transitions, Marceles says. "The biggest mistake is trying to fix everything at once or imposing top-down change without context," he notes. "A new CIO needs to balance urgency with empathy, understanding that cleaning up someone else’s mess is as much about culture repair as it is about tech realignment." ... When you inherit a messy situation, it's both a technical and leadership challenge, de Silva says. "The best thing you can do is lead with transparency, make thoughtful decisions, and rebuild confidence across the organization." People want to see steady hands and clear thinking, he observes. "That goes a long way in these situations."


Every Business Is Becoming An AI Company. Here's How To Do It Right

“The extent to which we can use AI to augment the curious, driven and collaborative tendencies of our teams, the more optimistic we can be about their ability to develop new, unimagined innovations that open new streams of revenue,” Aktar writes. Otherwise, executives may expect more from employees without considering that new tech tools require training to use well, and troubleshooting to maintain. Plus, automated production routinely requires human intervention to protect quality. If executives merely expect teams to churn out more work — seeing AI tools and services as a way to reduce headcount — the result may be additional work and lower morale. “Workers report spending more time reviewing AI-generated content and learning tool complexities than the time these tools supposedly save,” writes Forbes contributor Luis Romero, the founder of GenStorm AI. ... “What draws people in now isn’t just communication. It’s the sense that someone notices effort before asking for output,” writes Forbes contributor Vibhas Ratanjee, a Gallup researcher who specializes in leadership development. “Most internal tools are built to save time. Fewer steps. Smoother clicks. But frictionless doesn’t always mean thoughtful. When we remove human pauses, we risk removing the parts that build connection.”


Four Steps for Turning Data Clutter into Competitive Power: Your Sovereign AI and Data Blueprint

The ability to act on data in real-time isn’t just beneficial—it’s a necessity in today’s fast-paced world. Accenture reports that companies able to leverage real-time data are 2.5 times more likely to outperform competitors. Consider Uber, which adjusts its pricing dynamically based on real-time factors like demand, traffic, and weather conditions. This near-instant capability drives business success by aligning offerings with evolving customer needs. Companies stand to gain a lot by giving frontline employees the ability to make informed, real-time decisions. But in order to do so, they need a near-instant understanding of customer data. This means the data needs to flow seamlessly across domains so that real-time models can provide timely information to help workers make impactful decisions. ... The success of AI initiatives depends on the ability to access, govern, and process at scale. Therefore, the success of an enterprise’s AI initiatives hinges on its ability to access its data anywhere, anytime—while maintaining compliance. These new demands require a governance framework that operates across environments—from on-premise to private and public clouds—while maintaining flexibility and compliance every step of the way. Companies like Netflix, which handles billions of daily data events, rely on sophisticated data architectures to support AI-driven recommendations.


Third-party risk management is broken — but not beyond repair

The consequences of this checkbox culture extend beyond ineffective risk management and have led to “questionnaire fatigue” among vendors. In many cases, security questionnaires are delivered as one-size-fits-all templates, an approach that floods recipients with static, repetitive questions, many of which aren’t relevant to their specific role or risk posture. Without tailoring or context, these reviews become procedural exercises rather than meaningful evaluations. The result is surface-level engagement, where companies appear to conduct due diligence but in fact miss critical insights. Risk profiles end up looking complete on paper while failing to capture the real-world complexity of the threats they’re meant to address. ... To break away from this harmful cycle, organizations must overhaul their approach to TPRM from the ground up by adopting a truly risk-based approach that moves beyond simple compliance. This requires developing targeted, substantive security questionnaires that prioritize depth over breadth and get to the heart of a vendor’s security practices. Rather than sending out blanket questionnaires, organizations should create assessments that are specific, relevant, and probing, asking questions that genuinely reveal the strengths and weaknesses of a vendor’s cybersecurity posture. This emphasis on quality over quantity in assessments allows organizations to move away from treating TPRM as a paperwork exercise and back toward its original intent: effective risk management.


The rise of agentic AI and what it means for ANZ enterprise

Agentic AI has unique benefits, but it also presents unique risks, and as more organisations adopt agentic AI, they're discovering that robust data governance—the establishment of policies, roles, and technology to manage and safeguard an organisation's data assets—is essential when it comes to ensuring that these systems function securely and effectively. ... Effective governance is on the rise because it helps address critical AI-related security and productivity issues like preventing data breaches and reducing AI-related errors. Without strong data governance measures, agents may inadvertently expose sensitive information or make flawed autonomous decisions. With strong data governance measures, organisations can proactively safeguard their data by implementing comprehensive governance policies and deploying technologies to monitor AI runtime environments. This not only enhances security but also ensures that agentic AI tools operate optimally, delivering significant value with minimal risk. ... To grapple with these and other AI-related challenges, Gartner now recommends that organisations apply its AI TRiSM (trust, risk, and security management) framework to their data environments. Data and information governance are a key part of this framework, along with AI governance and AI runtime inspection and enforcement technology. 


Choosing a Clear Direction in the Face of Growing Cybersecurity Demands

CISOs must balance multiple priorities, with many facing overwhelming workloads, budget constraints, insufficient board-level support, and unreasonable demands. From a revenue perspective, they must align cybersecurity strategies with business goals, ensuring that security investments support revenue generation and protect critical assets. They're under pressure to automate repetitive tasks, consolidating and streamlining processes while minimizing downtime and disruption. And then there is AI and the potential benefits it may bring to the security team and to the productivity of users. But all the while, they must remember that with AI we have put technology in the hands of users who have not traditionally been good with tech, because we've made it easier and quicker than ever before. ... They need to choose one key goal rather than trying to do everything. Do I want to "go faster" and innovate? Or do I want to become a more efficient business and "do more" with less? Whichever they opt for, they also need to figure out which tools to use to accomplish that goal. This is where cybersecurity automation and AI come into play. By using AI, machine learning, and automated tools to detect, prevent, and respond to cyber threats without human intervention, CISOs can streamline their security operations, reduce manual workload, and improve response times to cyberattacks and, in effect, do more with less.
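As a rough illustration of that "respond without human intervention" pattern, the sketch below auto-triages alerts by risk score so that only mid-tier events reach an analyst queue. The thresholds, field names, and actions are assumptions for illustration, not a reference to any particular SOC tool:

```python
# Illustrative alert triage rule: high-risk events are contained
# automatically, mid-risk events are escalated to a human, and the rest
# are simply logged. Thresholds and schema are invented for this sketch.

def triage(alert: dict) -> str:
    """Map an alert's risk score to an automated response action."""
    score = alert.get("risk_score", 0)
    if score >= 80:
        return "auto-contain"   # e.g. isolate the affected host immediately
    if score >= 50:
        return "escalate"       # queue for analyst review
    return "log"                # record and move on

assert triage({"risk_score": 90}) == "auto-contain"
assert triage({"risk_score": 60}) == "escalate"
assert triage({"risk_score": 10}) == "log"
```

Even a rule this simple shows the trade-off the article raises: the automation saves analyst time, but the "auto-contain" branch is exactly the kind of high-stakes action that still warrants human oversight.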


Will AI replace humans at work? 4 ways it already has the edge

There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos. AI models can do the job blazingly fast, a capability with important industrial applications. ... AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally. AI models can do this for every single product, TV show, website, and internet user. ... AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. 
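The speed gap on upscaling is easy to see even without a neural model. The toy sketch below does naive nearest-neighbour upscaling on a 2D grid of pixel values; real AI upscalers go further by hallucinating plausible fine detail on top of this kind of resampling, but the point stands that a machine does in microseconds what a human retoucher does in minutes:

```python
# Toy nearest-neighbour upscaler: each output pixel copies the nearest
# source pixel. A 2x2 "image" becomes 4x4 instantly; AI super-resolution
# models add learned detail on top of this basic resampling step.

def upscale_nearest(img, factor):
    """Scale a 2D grid by an integer factor using nearest-neighbour sampling."""
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

small = [[0, 255],
         [255, 0]]
big = upscale_nearest(small, 2)
assert len(big) == 4 and len(big[0]) == 4
assert big[0] == [0, 0, 255, 255]   # top row: each pixel duplicated
```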


8 steps to ensure data privacy compliance across borders

Given the conflicting and evolving nature of global privacy laws, a one-size-fits-all approach is ineffective. Instead, companies should adopt a baseline standard that can be applied globally. “We default to the strictest applicable standard,” says Kory Fong, VP of engineering at Private AI in Toronto. “Our baseline makes sure we can flexibly adapt to regional laws without starting from scratch each time a regulation changes.” ... “It’s about creating an environment where regulatory knowledge is baked into day-to-day decision making,” he says. “We regularly monitor global policy developments and involve our privacy experts early in the planning process so we’re prepared, not just reactive.” Alex Spokoiny, CIO at Israel’s Check Point Software Technologies, says to stay ahead of emerging regulations, his company has moved away from rigid policies to a much more flexible, risk-aware approach. “The key is staying close to what data we collect, where it flows, and how it’s used so we can adjust quickly when new rules come up,” he says. ... Effective data privacy management requires a multidisciplinary approach, involving IT, legal, compliance, and product teams. “Cross-functional collaboration is built into our steering teams,” says Lexmark’s Willett. “Over the years, we’ve fundamentally transformed our approach to data governance by establishing the Enterprise Data Governance and Ethics community.”
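The "default to the strictest applicable standard" approach can be sketched as a simple fold over per-regulation controls: for each control, the global baseline takes the tightest value any applicable regime demands. The regulation names and numeric limits below are illustrative assumptions, not legal guidance:

```python
# Hedged sketch of a strictest-baseline policy: the global default takes
# the shortest retention and the most demanding consent requirement among
# all applicable regimes. Values here are invented for illustration.

REGIMES = {
    "regime_a": {"retention_days": 30,  "requires_consent": True},
    "regime_b": {"retention_days": 365, "requires_consent": False},
}

def strictest_baseline(regimes: dict) -> dict:
    """Minimum retention; consent required if any regime requires it."""
    return {
        "retention_days": min(r["retention_days"] for r in regimes.values()),
        "requires_consent": any(r["requires_consent"] for r in regimes.values()),
    }

baseline = strictest_baseline(REGIMES)
assert baseline == {"retention_days": 30, "requires_consent": True}
```

The benefit Fong describes follows directly: when a new regional law arrives, it is merged into the table and the baseline recomputes, rather than forcing a policy rewrite from scratch.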


Leading without titles: The rise of influence-driven leadership

Leadership isn’t about being in charge—it’s about showing up when it matters, listening when it's hardest, and holding space when others need it most. It’s not about corner offices or formal titles—it’s about quiet strength, humility, and the courage to uplift. The leaders who will shape the future are not defined by their job descriptions, but by how they make others feel—especially in moments of uncertainty. The associate who lifts a teammate’s spirits, the manager who creates psychological safety, the engineer who ensures quieter voices are heard—these are the ones redefining leadership through compassion, not control. As Simon Sinek reminds us, "Leadership is not about being in charge. It is about taking care of those in your charge." Real leadership leaves people better than it found them. It inspires not by authority, but by action. It earns loyalty not through power, but through presence. According to Gartner (2024), 74% of employees are more likely to stay in organisations where leadership is approachable, transparent, and grounded in shared values—not status. Let’s recognise these leaders. Let’s build cultures that reward empathy, connection, and quiet courage. Because true leadership makes people feel seen—not small.