Daily Tech Digest - August 09, 2025


Quote for the day:

“Develop success from failures. Discouragement and failure are two of the surest stepping stones to success.” -- Dale Carnegie


Is ‘Decentralized Data Contributor’ the Next Big Role in the AI Economy?

Training AI models requires real-world, high-quality, and diverse data. The problem is that the astronomical demand is slowly outpacing the available sources. Take public datasets as an example. Not only is this data overused, but it’s often restricted to avoid privacy or legal concerns. There’s also a huge issue with geographic or spatial data gaps where the information is incomplete regarding specific regions, which can and will lead to inaccuracies or biases in AI models. Decentralized contributors can help address these challenges. ... Even though a large part of the world’s population has no problem with passively sharing data when browsing the web, due to the relative infancy of decentralized systems, active data contribution may seem to many like a bridge too far. Anonymized data isn’t 100% safe. Determined threat actors can sometimes re-identify individuals from anonymized datasets. The concern is valid, which is why decentralized projects working in the field must adopt privacy-by-design architectures where privacy is a core part of the system instead of being layered on top after the fact. Zero-knowledge proofs are another technique that can reduce privacy risks by allowing contributors to prove the validity of the data without exposing any information. For example, a contributor can demonstrate that their identity meets set criteria without divulging anything identifiable.
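
To make the zero-knowledge idea concrete, the sketch below implements a toy Schnorr-style proof of knowledge in Python: the prover convinces a verifier that it knows a secret value without revealing it. The parameters, the simplified Fiat-Shamir challenge, and the framing as a contributor credential are illustrative assumptions, not the scheme of any particular decentralized project.

```python
# Minimal Schnorr-style proof-of-knowledge sketch (illustrative only, not production crypto).
# The prover shows knowledge of a secret x behind the public value y = g^x mod p
# without revealing x, the same principle as a contributor proving facts about
# their data or identity without exposing them.
import hashlib
import secrets

p = 2**255 - 19   # toy prime modulus chosen for illustration
g = 5             # toy generator; real systems use vetted groups and libraries

def prove(x: int):
    """Prover: commit to a random value, then answer a Fiat-Shamir challenge."""
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)                                   # commitment
    # Simplified challenge; a real protocol hashes g, y, and t together.
    c = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % (p - 1)
    s = (r + c * x) % (p - 1)                          # response
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: checks g^s == t * y^c without ever learning x."""
    c = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % (p - 1)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(p - 1)   # contributor's secret (e.g., an identity credential)
y = pow(g, x, p)               # public value shared with the network
print(verify(y, *prove(x)))    # True: validity proven, secret never disclosed
```

A production system would rely on audited proof systems and libraries (for example zk-SNARK toolkits) rather than hand-rolled arithmetic like this.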


The ROI of Governance: Nithesh Nekkanti on Taming Enterprise Technical Debt

A key symptom of technical debt is rampant code duplication, which inflates maintenance efforts and increases the risk of bugs. A multi-pronged strategy focused on standardization and modularity proved highly effective, leading to a 30% reduction in duplicated code. This initiative went beyond simple syntax rules to forge a common development language, defining exhaustive standards for Apex and Lightning Web Components. By measuring metrics like technical debt density, teams can effectively track the health of their codebase as it evolves. ... Developers may perceive stricter quality gates as a drag on velocity, and the task of addressing legacy code can seem daunting. Overcoming this resistance requires clear communication and a focus on the long-term benefits. "Driving widespread adoption of comprehensive automated testing and stringent code quality tools invariably presents cultural and operational challenges," Nekkanti acknowledges. The solution was to articulate a compelling vision. ... Not all technical debt is created equal, and a mature governance program requires a nuanced approach to prioritization. The PEC developed a technical debt triage framework to systematically categorize issues based on type, business impact, and severity. This structured process is vital for managing a complex ecosystem, where a formal Technical Governance Board (TGB) can use data to make informed decisions about where to invest resources.
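
The article does not define how technical debt density is calculated; a common convention, assumed in the sketch below, is estimated remediation effort per thousand lines of code, with all module names and figures invented for illustration.

```python
# Hypothetical sketch: technical debt density as remediation minutes per KLOC.
# In practice the inputs would come from a static-analysis tool's debt estimates
# and line counts; the numbers here are made up.
modules = {
    # module name: (estimated remediation minutes, lines of code)
    "account-service": (1_440, 62_000),
    "billing-lwc":     (  480, 11_500),
    "legacy-triggers": (2_900, 48_000),
}

def debt_density(minutes: float, loc: int) -> float:
    """Remediation minutes per 1,000 lines of code."""
    return minutes / (loc / 1_000)

for name, (minutes, loc) in modules.items():
    print(f"{name:18s} {debt_density(minutes, loc):6.1f} min/KLOC")
```

Tracking this ratio over releases is what lets a governance board see whether the codebase is getting healthier even as it grows.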


Why Third-Party Risk Management (TPRM) Can’t Be Ignored in 2025

In today’s business world, no organization operates in a vacuum. We rely on vendors, suppliers, and contractors to keep things running smoothly. But every connection brings risk. Just recently, Fortinet made headlines as threat actors were found maintaining persistent access to FortiOS and FortiProxy devices using known vulnerabilities—while another actor allegedly offered a zero-day exploit for FortiGate firewalls on a dark web forum. These aren’t just IT problems—they’re real reminders of how vulnerabilities in third-party systems can open the door to serious cyber threats, regulatory headaches, and reputational harm. That’s why Third-Party Risk Management (TPRM) has become a must-have, not a nice-to-have. ... Think of TPRM as a structured way to stay on top of the risks your third parties, suppliers and vendors might expose you to. It’s more than just ticking boxes during onboarding—it’s an ongoing process that helps you monitor your partners’ security practices, compliance with laws, and overall reliability. From cloud service providers, logistics partners, and contract staff to software vendors, IT support providers, marketing agencies, payroll processors, data analytics firms, and even facility management teams—if they have access to your systems, data, or customers, they’re part of your risk surface. 


Ushering in a new era of mainframe modernization

One of the key challenges in modern IT environments is integrating data across siloed systems. Mainframe data, despite being some of the most valuable in the enterprise, often remains underutilized due to accessibility barriers. With a z17 foundation, software data solutions can more easily bridge critical systems, offering unprecedented data accessibility and observability. For CIOs, this is an opportunity to break down historical silos and make real-time mainframe data available across cloud and distributed environments without compromising performance or governance. As data becomes more central to competitive advantage, the ability to bridge existing and modern platforms will be a defining capability for future-ready organizations. ... For many industries, mainframes continue to deliver unmatched performance, reliability, and security for mission-critical workloads—capabilities that modern enterprises rely on to drive digital transformation. Far from being outdated, mainframes are evolving through integration with emerging technologies like AI, automation, and hybrid cloud, enabling organizations to modernize without disruption. With decades of trusted data and business logic already embedded in these systems, mainframes provide a resilient foundation for innovation, ensuring that enterprises can meet today’s demands while preparing for tomorrow’s challenges.


Fighting Cyber Threat Actors with Information Sharing

Effective threat intelligence sharing creates exponential defensive improvements that extend far beyond individual organizational benefits. It not only raises the cost and complexity for attackers but also lowers their chances of success. Information Sharing and Analysis Centers (ISACs) demonstrate this multiplier effect in practice. ISACs are, essentially, non-profit organizations that provide companies with timely intelligence and real-world insights, helping them boost their security. The success of existing ISACs has also driven expansion efforts, with 26 U.S. states adopting the NAIC Model Law to encourage information sharing in the insurance sector. ... Although the benefits of information sharing are clear, actually putting it into practice is a different story. Common obstacles include legal issues regarding data disclosure, worries over revealing vulnerabilities to competitors, and the technical challenge itself – evidently, devising standardized threat intelligence formats is no walk in the park. And yet it can certainly be done. Case in point: the above-mentioned partnership between CrowdStrike and Microsoft. Its success hinges on its well-thought-out governance system, which allows these two business rivals to collaborate on threat attribution while protecting their proprietary techniques and competitive advantages.


The Ultimate Guide to Creating a Cybersecurity Incident Response Plan

Creating a fit-for-purpose cyber incident response plan isn’t easy. However, by adopting a structured approach, you can ensure that your plan is tailored for your organisational risk context and will actually help your team manage the chaos that follows a cyber attack. In our experience, following a step-by-step process to build a robust IR plan always works. Instead of jumping straight into creating a plan, it’s best to lay a strong foundation with training and risk assessment and then work your way up. ... Conducting a cyber risk assessment before creating a Cybersecurity Incident Response Plan is critical. Every business has different assets, systems, vulnerabilities, and exposure to risk. A thorough risk assessment identifies what assets need the most protection. The assets could be customer data, intellectual property, or critical infrastructure. You’ll be able to identify where the most likely entry points for attackers may be. This insight ensures that the incident response plan is tailored and focused on the most pressing risks instead of being a generic checklist. A risk assessment will also help you define the potential impact of various cyber incidents on your business. You can prioritise response strategies based on what incidents would be most damaging. Without this step, response efforts may be misaligned or inadequate in the face of a real threat.
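
One lightweight way to turn a risk assessment into response priorities is to score and rank plausible incident scenarios. The sketch below assumes a simple likelihood-times-impact model; the assets, scenarios, and ratings are invented for illustration.

```python
# Illustrative risk scoring sketch: likelihood x impact, each rated 1-5.
# Asset names, scenarios, and ratings are hypothetical.
scenarios = [
    # (asset, incident scenario, likelihood 1-5, impact 1-5)
    ("customer database", "ransomware via phished admin account", 4, 5),
    ("public website",    "defacement via CMS exploit",           3, 2),
    ("payroll SaaS",      "third-party credential leak",          3, 4),
]

# Rank scenarios so response planning starts with the most damaging, most likely ones.
ranked = sorted(scenarios, key=lambda s: s[2] * s[3], reverse=True)
for asset, scenario, likelihood, impact in ranked:
    print(f"risk={likelihood * impact:2d}  {asset}: {scenario}")
```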


How to Become the Leader Everyone Trusts and Follows With One Skill

Leaders grounded in reason have a unique ability; they can take complex situations and make sense of them. They look beyond the surface to find meaning and use logic as their compass. They're able to spot patterns others might miss and make clear distinctions between what's important and what's not. Instead of being guided by emotion, they base their decisions on credibility, relevance and long-term value. ... The ego doesn't like reason. It prefers control, manipulation and being right. At its worst, it twists logic to justify itself or dominate others. Some leaders use data selectively or speak in clever soundbites, not to find truth but to protect their image or gain power. But when a leader chooses reason, something shifts. They let go of defensiveness and embrace objectivity. They're able to mediate fairly, resolve conflicts wisely and make decisions that benefit the whole team, not just their own ego. This mindset also breaks down the old power structures. Instead of leading through authority or charisma, leaders at this level influence through clarity, collaboration and solid ideas. ... Leaders who operate from reason naturally elevate their organizations. They create environments where logic, learning and truth are not just considered as values, they're part of the culture. This paves the way for innovation, trust and progress. 


Why enterprises can’t afford to ignore cloud optimization in 2025

Cloud computing has long been the backbone of modern digital infrastructure, primarily built around general-purpose computing. However, the era of one-size-fits-all cloud solutions is rapidly fading in a business environment increasingly dominated by AI and high-performance computing (HPC) workloads. Legacy cloud solutions struggle to meet the computational intensity of deep learning models, preventing organizations from fully realizing the benefits of their investments. At the same time, cloud-native architectures have become the standard, as businesses face mounting pressure to innovate, reduce time-to-market, and optimize costs. Without a cloud-optimized IT infrastructure, organizations risk losing key operational advantages—such as maximizing performance efficiency and minimizing security risks in a multi-cloud environment—ultimately negating the benefits of cloud-native adoption. Moreover, running AI workloads at scale without an optimized cloud infrastructure leads to unnecessary energy consumption, increasing both operational costs and environmental impact. This inefficiency strains financial resources and undermines corporate sustainability goals, which are now under greater scrutiny from stakeholders who prioritize green initiatives.


Data Protection for Whom?

To be clear, there is no denying that a robust legal framework for protecting privacy is essential. In the absence of such protections, both rich and poor citizens face exposure to fraud, data theft and misuse. Personal data leakages – ranging from banking details to mobile numbers and identity documents – are rampant, and individuals are routinely subjected to financial scams, unsolicited marketing and phishing attacks. Often, data collected for one purpose – such as KYC verification or government scheme registration – finds its way into other hands without consent. ... The DPDP Act, in theory, establishes strong penalties for violations. However, the enforcement mechanisms under the Act are opaque. The composition and functioning of the Data Protection Board – a body tasked with adjudicating complaints and imposing penalties – are entirely controlled by the Union government. There is no independent appointments process, no safeguards against arbitrary decision-making, and no clear procedure for appeals. Moreover, there is a genuine worry that smaller civil society initiatives – such as grassroots surveys, independent research and community-based documentation efforts – will be priced out of existence. The compliance costs associated with data processing under the new framework, including consent management, data security audits and liability for breaches, are likely to be prohibitive for most non-profit and community-led groups.


Stargate’s slow start reveals the real bottlenecks in scaling AI infrastructure

“Scaling AI infrastructure depends less on the technical readiness of servers or GPUs and more on the orchestration of distributed stakeholders — utilities, regulators, construction partners, hardware suppliers, and service providers — each with their own cadence and constraints,” Gogia said. ... Mazumder warned that “even phased AI infrastructure plans can stall without early coordination” and advised that “enterprises should expect multi-year rollout horizons and must front-load cross-functional alignment, treating AI infra as a capital project, not a conventional IT upgrade.” ... Given the lessons from Stargate’s delays, analysts recommend a pragmatic approach to AI infrastructure planning. Rather than waiting for mega-projects to mature, Mazumder emphasized that “enterprise AI adoption will be gradual, not instant and CIOs must pivot to modular, hybrid strategies with phased infrastructure buildouts.” ... The solution is planning for modular scaling by deploying workloads in hybrid and multi-cloud environments so progress can continue even when key sites or services lag. ... For CIOs, the key lesson is to integrate external readiness into planning assumptions, create coordination checkpoints with all providers, and avoid committing to go-live dates that assume perfect alignment.

Daily Tech Digest - July 27, 2025


Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs


Amazon AI coding agent hacked to inject data wiping commands

The hacker gained access to Amazon’s repository after submitting a pull request from a random account, likely due to workflow misconfiguration or inadequate permission management by the project maintainers. ... On July 23, Amazon received reports from security researchers that something was wrong with the extension, and the company started to investigate. The next day, AWS released a clean version, Q 1.85.0, which removed the unapproved code. “AWS is aware of and has addressed an issue in the Amazon Q Developer Extension for Visual Studio Code (VSC). Security researchers reported a potential for unapproved code modification,” reads the security bulletin. “AWS Security subsequently identified a code commit through a deeper forensic analysis in the open-source VSC extension that targeted Q Developer CLI command execution.” “After which, we immediately revoked and replaced the credentials, removed the unapproved code from the codebase, and subsequently released Amazon Q Developer Extension version 1.85.0 to the marketplace.” AWS assured users that there was no risk from the previous release because the malicious code was incorrectly formatted and wouldn’t run on their environments.


How to migrate enterprise databases and data to the cloud

Migrating data is only part of the challenge; database structures, stored procedures, triggers and other code must also be moved. In this part of the process, IT leaders must identify and select migration tools that address the specific needs of the enterprise, especially if they’re moving between different database technologies (heterogeneous migration). Some things they’ll need to consider are: compatibility, transformation requirements and the ability to automate repetitive tasks.  ... During migration, especially for large or critical systems, IT leaders should keep their on-premises and cloud databases synchronized to avoid downtime and data loss. To help facilitate this, select synchronization tools that can handle the data change rates and business requirements. And be sure to test these tools in advance: High rates of change or complex data relationships can overwhelm some solutions, making parallel runs or phased cutovers unfeasible. ... Testing is a safety net. IT leaders should develop comprehensive test plans that cover not just technical functionality, but also performance, data integrity and user acceptance. Leaders should also plan for parallel runs, operating both on-premises and cloud systems in tandem, to validate that everything works as expected before the final cutover. They should engage end users early in the process in order to ensure the migrated environment meets business needs.
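
As one piece of such a test plan, a basic integrity check can compare row counts and checksums between the source and target databases before cutover. The sketch below is a rough illustration only: the table list is invented, the connections are generic DB-API placeholders, and real migrations typically rely on database-specific validation tooling.

```python
# Hypothetical post-migration validation sketch: compare row counts and a cheap
# checksum per table between the on-premises source and the cloud target.
# The caller supplies two DB-API connections (e.g., from sqlite3 or psycopg2).
import hashlib

TABLES = ["customers", "orders", "order_items"]  # assumed table list

def table_fingerprint(conn, table: str) -> tuple[int, str]:
    """Return (row count, SHA-256 over rows in a deterministic order)."""
    cur = conn.cursor()
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    count = cur.fetchone()[0]
    cur.execute(f"SELECT * FROM {table} ORDER BY 1")  # order so both sides hash identically
    digest = hashlib.sha256()
    for row in cur:
        digest.update(repr(tuple(row)).encode())
    return count, digest.hexdigest()

def validate(src_conn, dst_conn) -> bool:
    ok = True
    for table in TABLES:
        src = table_fingerprint(src_conn, table)
        dst = table_fingerprint(dst_conn, table)
        match = src == dst
        ok = ok and match
        print(f"{table:12s} source_rows={src[0]:>8} target_rows={dst[0]:>8}  "
              f"{'OK' if match else 'MISMATCH'}")
    return ok
```

Full-table hashing is only practical for modest data volumes; larger migrations usually sample rows or checksum per partition, but the comparison idea is the same.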


Researchers build first chip combining electronics, photonics, and quantum light

The new chip integrates quantum light sources and electronic controllers using a standard 45-nanometer semiconductor process. This approach paves the way for scaling up quantum systems in computing, communication, and sensing, fields that have traditionally relied on hand-built devices confined to laboratory settings. "Quantum computing, communication, and sensing are on a decades-long path from concept to reality," said Miloš Popović, associate professor of electrical and computer engineering at Boston University and a senior author of the study. "This is a small step on that path – but an important one, because it shows we can build repeatable, controllable quantum systems in commercial semiconductor foundries." ... "What excites me most is that we embedded the control directly on-chip – stabilizing a quantum process in real time," says Anirudh Ramesh, a PhD student at Northwestern who led the quantum measurements. "That's a critical step toward scalable quantum systems." This focus on stabilization is essential to ensure that each light source performs reliably under varying conditions. Imbert Wang, a doctoral student at Boston University specializing in photonic device design, highlighted the technical complexity.


Product Manager vs. Product Owner: Why Teams Get These Roles Wrong

While PMs work on the strategic plane, Product Owners anchor delivery. The PO is the guardian of the backlog. They translate the product strategy into epics and user stories, groom the backlog, and support the development team during sprints. They don’t just manage the “what” — they deeply understand the “how.” They answer developer questions, clarify scope, and constantly re-evaluate priorities based on real-time feedback. In Agile teams, they play a central role in turning strategic vision into working software. Where PMs answer to the business, POs are embedded with the dev team. They make trade-offs, adjust scope, and ensure the product is built right. ... Some products need to grow fast. That’s where Growth PMs come in. They focus on the entire user lifecycle, often structured using the PIRAT funnel: Problem, Insight, Reach, Activation, and Trust (a modern take on traditional Pirate Metrics, such as Acquisition, Activation, Retention, Referral, and Revenue). This model guides Growth PMs in identifying where user friction occurs and what levers to pull for meaningful impact. They conduct experiments, optimize funnels, and collaborate closely with marketing and data science teams to drive user growth. 


Ransomware payments to be banned – the unanswered questions

With thresholds in place, businesses/organisations may choose to operate differently so that they aren’t covered by the ban, such as lowering turnover or number of employees. All of this said, rules like this could help to get a better picture of what’s going on with ransomware threats in the UK. Arda Büyükkaya, senior cyber threat intelligence analyst at EclecticIQ, explains more: “As attackers evolve their tactics and exploit vulnerabilities across sectors, timely intelligence-sharing becomes critical to mounting an effective defence. Encouraging businesses to report incidents more consistently will help build a stronger national threat intelligence picture, something that’s important as these attacks grow more frequent and sophisticated. To spare any confusion, sector-specific guidance should be provided by government on how resources should be implemented, making resources clear and accessible.” “Many victims still hesitate to come forward due to concerns around reputational damage, legal exposure, or regulatory fallout,” said Büyükkaya. “Without mechanisms that protect and support victims, underreporting will remain a barrier to national cyber resilience.” Especially in the earlier days of the legislation, organisations may still feel pressured to pay in order to keep operations running, even if they’re banned from doing so.


AI Unleashed: Shaping the Future of Cyber Threats

AI optimizes reconnaissance and targeting, giving hackers the tools to scour public sources, leaked and publicly available breach data, and social media to build detailed profiles of potential targets in minutes. This enhanced data gathering lets attackers identify high-value victims and network vulnerabilities with unprecedented speed and accuracy. AI has also supercharged phishing campaigns by automatically crafting phishing emails and messages that mimic an organization’s formatting and reference real projects or colleagues, making them nearly indistinguishable from genuine human-originated communications. ... AI is also being weaponized to write and adapt malicious code. AI-powered malware can autonomously modify itself to slip past signature-based antivirus defenses, probe for weaknesses, select optimal exploits, and manage its own command-and-control decisions. Security experts note that AI accelerates the malware development cycle, reducing the time from concept to deployment. ... AI presents more than external threats. It has exposed a new category of targets and vulnerabilities, as many organizations now rely on AI models for critical functions, such as authentication systems and network monitoring. These AI systems themselves can be manipulated or sabotaged by adversaries if proper safeguards have not been implemented.


Agile and Quality Engineering: Building a Culture of Excellence Through a Holistic Approach

Agile development relies on rapid iteration and frequent delivery, and this rhythm demands fast, accurate feedback on code quality, functionality, and performance. With continuous testing integrated into automated pipelines, teams receive near real-time feedback on every code commit. This immediacy empowers developers to make informed decisions quickly, reducing delays caused by waiting for manual test cycles or late-stage QA validations. Quality engineering also enhances collaboration between developers and testers. In a traditional setup, QA and development operate in silos, often leading to communication gaps, delays, and conflicting priorities. In contrast, QE promotes a culture of shared ownership, where developers write unit tests, testers contribute to automation frameworks, and both parties work together during planning, development, and retrospectives. This collaboration strengthens mutual accountability and leads to better alignment on requirements, acceptance criteria, and customer expectations. Early and continuous risk mitigation is another cornerstone benefit. By incorporating practices like shift-left testing, test-driven development (TDD), and continuous integration (CI), potential issues are identified and resolved long before they escalate. 
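
In practice, shift-left testing often looks like a developer committing the test together with the change so CI exercises it on every push. The pytest-style sketch below uses an invented function and business rule purely to illustrate the habit.

```python
# Hypothetical unit tests written alongside the code (test-driven) and run by CI
# on every commit. discount() and its rule are invented for illustration.
def discount(order_total: float, is_loyalty_member: bool) -> float:
    """Return the discounted total: loyalty members get 10% off orders over 100."""
    if is_loyalty_member and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

def test_loyalty_discount_applies_over_threshold():
    assert discount(150.0, is_loyalty_member=True) == 135.0

def test_no_discount_for_guests():
    assert discount(150.0, is_loyalty_member=False) == 150.0
```

Because the tests travel with the code, a failing commit is flagged within minutes rather than surfacing in a late-stage QA cycle.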


Could Metasurfaces be The Next Quantum Information Processors?

Broadly speaking, the work embodies metasurface-based quantum optics which, beyond carving a path toward room-temperature quantum computers and networks, could also benefit quantum sensing or offer “lab-on-a-chip” capabilities for fundamental science. Designing a single metasurface that can finely control properties like brightness, phase, and polarization presented unique challenges because of the mathematical complexity that arises once the number of photons and therefore the number of qubits begins to increase. Every additional photon introduces many new interference pathways, which in a conventional setup would require a rapidly growing number of beam splitters and output ports. To bring order to the complexity, the researchers leaned on a branch of mathematics called graph theory, which uses points and lines to represent connections and relationships. By representing entangled photon states as many connected lines and points, they were able to visually determine how photons interfere with each other, and to predict their effects in experiments. Graph theory is also used in certain types of quantum computing and quantum error correction but is not typically considered in the context of metasurfaces, including their design and operation. The resulting paper was a collaboration with the lab of Marko Loncar, whose team specializes in quantum optics and integrated photonics and provided needed expertise and equipment.


New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples

When faced with a complex problem, current LLMs largely rely on chain-of-thought (CoT) prompting, breaking down problems into intermediate text-based steps, essentially forcing the model to “think out loud” as it works toward a solution. While CoT has improved the reasoning abilities of LLMs, it has fundamental limitations. In their paper, researchers at Sapient Intelligence argue that “CoT for reasoning is a crutch, not a satisfactory solution. It relies on brittle, human-defined decompositions where a single misstep or a misorder of the steps can derail the reasoning process entirely.” ... To move beyond CoT, the researchers explored “latent reasoning,” where instead of generating “thinking tokens,” the model reasons in its internal, abstract representation of the problem. This is more aligned with how humans think; as the paper states, “the brain sustains lengthy, coherent chains of reasoning with remarkable efficiency in a latent space, without constant translation back to language.” However, achieving this level of deep, internal reasoning in AI is challenging. Simply stacking more layers in a deep learning model often leads to a “vanishing gradient” problem, where learning signals weaken across layers, making training ineffective. 


For the love of all things holy, please stop treating RAID storage as a backup

Although RAID is a backup by definition, practically, a backup doesn't look anything like a RAID array. That's because an ideal backup is offsite. It's not on your computer, and ideally, it's not even in the same physical location. Remember, RAID is a warranty, and a backup is insurance. RAID protects you from inevitable failure, while a backup protects you from unforeseen failure. Eventually, your drives will fail, and you'll need to replace disks in your RAID array. This is part of routine maintenance, and if you're operating an array for long enough, you should probably have drive swaps on a schedule of several years to keep everything operating smoothly. A backup will protect you from everything else. Maybe you have multiple drives fail at once. A backup will protect you. Lord forbid you fall victim to a fire, flood, or other natural disaster and your RAID array is lost or damaged in the process. A backup still protects you. It doesn't need to be a fire or flood for you to get use out of a backup. There are small issues that could put your data at risk, such as your PC being infected with malware, or trying to write (and replicate) corrupted data. You can dream up just about any situation where data loss is a risk, and a backup will be able to get your data back in situations where RAID can't. 

Daily Tech Digest - May 24, 2025


Quote for the day:

“In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it.” -- Jane Smiley



DanaBot botnet disrupted, QakBot leader indicted

Operation Endgame relies on help from a number of private sector cybersecurity companies (Sekoia, Zscaler, CrowdStrike, Proofpoint, Fox-IT, ESET, and others), non-profits such as Shadowserver and white-hat groups like Cryptolaemus. “The takedown of DanaBot represents a significant blow not just to an eCrime operation but to a cyber capability that has appeared to align with Russian government interests. The case (…) highlights why we must view certain Russian eCrime groups through a political lens — as extensions of state power rather than mere criminal enterprises,” CrowdStrike commented on the DanaBot disruption. ... “We’ve previously seen disruptions have significant impacts on the threat landscape. For example, after last year’s Operation Endgame disruption, the initial access malware associated with the disruption as well as actors who used the malware largely disappeared from the email threat landscape,” Selena Larson, Staff Threat Researcher at Proofpoint, told Help Net Security. “Cybercriminal disruptions and law enforcement actions not only impair malware functionality and use but also impose cost to threat actors by forcing them to change their tactics, cause mistrust in the criminal ecosystem, and potentially make criminals think about finding a different career.”


AI in Cybersecurity: Protecting Against Evolving Digital Threats

Beyond detecting threats, AI excels at automating repetitive security tasks. Tasks like patching vulnerabilities, filtering malicious traffic, and conducting compliance checks can be time-consuming. AI’s speed and precision in handling these tasks free up cybersecurity professionals to focus on complex problem-solving. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity. ... The growing role of AI in cybersecurity necessitates strong regulatory frameworks. Governments and organizations are working to establish policies that address AI’s ethical and operational challenges in this field. Transparency in AI decision-making processes and standardized best practices are among the key priorities.


Open MPIC project defends against BGP attacks on certificate validation

MPIC is a method to enhance the security of certificate issuance by validating domain ownership and CA checks from multiple network vantage points. It helps prevent BGP hijacking by ensuring that validation checks return consistent results from different geographical locations. The goal is to make it more difficult for threat actors to compromise certificate issuance by redirecting internet routes. ... Open MPIC operates through a parallel validation architecture that maximizes efficiency while maintaining security. When a domain validation check is initiated, the framework simultaneously queries all configured perspectives and collects their results. “If you have 10 perspectives, then it basically asks all 10 perspectives at the same time, and then it will collect the results and determine the quorum and give you a thumbs up or thumbs down,” Sharkov said. This approach introduces some unavoidable latency, but the implementation minimizes performance impact through parallelization. Sharkov noted that the latency is still just a fraction of a second. ... The open source nature of the project addresses a significant challenge for the industry. While large certificate authorities often have the resources to build their own solutions, many smaller CAs would struggle with the technical and infrastructure requirements of multi-perspective validation.
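
The fan-out-and-quorum behaviour Sharkov describes can be pictured with a short sketch. The perspective names, the stubbed check function, and the 4-of-5 quorum rule below are assumptions for illustration, not the Open MPIC API.

```python
# Illustrative multi-perspective validation sketch: query every configured vantage
# point in parallel, then require a quorum of agreeing results before issuance.
from concurrent.futures import ThreadPoolExecutor

PERSPECTIVES = ["us-east", "eu-west", "ap-south", "sa-east", "af-north"]  # assumed
QUORUM = 4  # e.g. at least 4 of 5 perspectives must agree

def check_from_perspective(perspective: str, domain: str, token: str) -> bool:
    """Stand-in for a real DNS/HTTP domain-control check run from one vantage point."""
    return True  # placeholder result; the real check would execute remotely at `perspective`

def validate(domain: str, token: str) -> bool:
    # Ask all perspectives at the same time, as described above.
    with ThreadPoolExecutor(max_workers=len(PERSPECTIVES)) as pool:
        results = list(pool.map(
            lambda p: check_from_perspective(p, domain, token), PERSPECTIVES))
    passed = sum(results)
    print(f"{passed}/{len(PERSPECTIVES)} perspectives agreed for {domain}")
    return passed >= QUORUM

print(validate("example.com", "challenge-token"))
```

Parallelising the lookups is what keeps the added latency to a fraction of a second even though several networks are consulted.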


How to Close the Gap Between Potential and Reality in Tech Implementation

First, there has to be alignment between the business and tech sides. So, I’ve seen in many institutions that there’s not complete alignment between both. And where they could be starting, they sometimes separate and they go in opposite directions. Because at the end of the day, let’s face it, we’re all looking at how it will help ourselves. Secondly, it’s just the planning, ensuring that you check all the boxes and have a strong implementation plan. One recent customer who just joined Backbase: One of the things I loved about what they brought to the kickoff call was what success looked like to them for implementation. So, they had the work stream, whether the core integration, the call center, their data strategy, or their security requirements. Then, they had the leader who was the overall owner and then they had the other owners of each work stream. Then, they defined success criteria with the KPIs associated with those success criteria. ... Many folks forget that they are, most of the time, still running on a legacy platform. So, for me, success is when they decommission that legacy platform and a hundred percent of their members or customers are on Backbase. That’s one of the very important internal KPIs.


How AIOps sharpens cybersecurity posture in the age of cyber threats

The good news is, AIOps platforms are built to scale with complexity, adapting to new environments, users, and risks as they develop. And organizations can feel reassured that their digital vulnerabilities are safeguarded for the long term. For example, modern methods of attack, such as hyperjacking, can be identified and mitigated with AIOps. This form of attack in cloud security is where a threat actor gains control of the hypervisor – the software that manages virtual machines on a physical server. It allows them to then take over the virtual machines running on that hypervisor. What makes hyperjacking especially dangerous is that it operates beneath the guest operating systems, effectively evading traditional monitoring tools that rely on visibility within the virtual machines. As a result, systems lacking deep observability are the most vulnerable. This makes the advanced observability capabilities of AIOps essential for detecting and responding to such stealthy threats. Naturally, this evolving scope of digital malice also requires compliance rules to be frequently reviewed. When correctly configured, AIOps can support organizations by interpreting the latest guidelines and swiftly identifying the data deviations that would otherwise incur penalties.


Johnson & Johnson Taps AI to Advance Surgery, Drug Discovery

J&J's Medical Engagement AI redefines care delivery, identifying 75,000 U.S. patients with unmet needs across seven disease areas, including oncology. Its analytics engine processes electronic health records and clinical guidelines to highlight patients missing optimal treatments. A New York oncologist, using J&J's insights, adjusted treatment for 20 patients in 2024, improving the chances of survival. The platform engages over 5,000 providers, empowering medical science liaisons with real-time data. It helps the AI innovation team turn overwhelming data into an advantage. Transparent data practices and a focus on patient outcomes align with J&J's ethical standards, making this a model that bridges tech and care. ... J&J's AI strategy rests on five ethical pillars, including fairness, privacy, security, responsibility and transparency. It aims to deliver AI solutions that benefit all stakeholders equitably. The stakeholders and users understand the methods through which datasets are collected and how external influences, such as biases, may affect them. Bias is mitigated through annual data audits, privacy is upheld with encrypted storage and consent protocols, and on top of it is AI-driven cybersecurity monitoring. A training program, launched in 2024, equipped 10,000 employees to handle sensitive data. 


Surveillance tech outgrows face ID

Many oppose facial recognition technology because it jeopardizes privacy, civil liberties, and personal security. It enables constant surveillance and raises the specter of a dystopian future in which people feel afraid to exercise free speech. Another issue is that one’s face can’t be changed like a password can, so if face-recognition data is stolen or sold on the Dark Web, there’s little anyone can do about the resulting identity theft and other harms. ... You can be identified by your gait (how you walk). And surveillance cameras now use AI-powered video analytics to track behavior, not just faces. They can follow you based on your clothing, the bag you carry, and your movement patterns, stitching together your path across a city or a stadium without ever needing a clear shot of your face. The truth is that face recognition is just the most visible part of a much larger system of surveillance. When public concern about face recognition causes bans or restrictions, governments, companies, and other organizations simply circumvent that concern by deploying other technologies from a large and growing menu of options. Whether we’re IT professionals, law enforcement technologists, security specialists, or privacy advocates, it’s important to incorporate the new identification technologies into our thinking, and face the new reality that face recognition is just one technology among many.


How Ready Is NTN To Go To Scale?

Non-Terrestrial Networks (NTNs) represent a pivotal advancement in global communications, designed to extend connectivity far beyond the limits of ground-based infrastructure. By leveraging spaceborne and airborne assets—such as Low Earth Orbit (LEO), Medium Earth Orbit (MEO), and Geostationary (GEO) satellites, as well as High-Altitude Platform Stations (HAPS) and UAVs—NTNs enable seamless coverage in regions previously considered unreachable. Whether traversing remote deserts, deep oceans, or mountainous terrain, NTNs provide reliable, scalable connectivity where traditional terrestrial networks fall short or are economically unviable. This paradigm shift is not merely about extending signal reach; it’s about enabling entirely new categories of applications and industries to thrive in real time. ... A core feature of NTNs is their use of varied orbital altitudes, each offering distinct performance characteristics. Low Earth Orbit (LEO) satellites (altitudes of 500–2,000 km) are known for their low latency (20–50 ms) and are ideal for real-time services. Medium Earth Orbit (MEO) systems (2,000–35,000 km) strike a balance between coverage and latency and are often used in navigation and communications. Geostationary Orbit (GEO) satellites, positioned at ~35,786 km, provide wide-area coverage from a fixed position relative to Earth’s rotation—particularly useful for broadcast and constant-area monitoring. 


Enterprises are wasting the cloud’s potential

One major key to achieving success with cloud computing is training and educating employees. Although the adoption of cloud technology signifies a significant change, numerous companies overlook the importance of equipping their staff with the technical expertise and strategic acumen to capitalize on its potential benefits. IT teams that lack expertise in cloud services may use cloud resources inefficiently or ineffectively. Business leaders who are unfamiliar with cloud tools often struggle to leverage data-driven insights that could drive innovation. Employees relying on cloud-based applications might not fully utilize all their functionality due to insufficient training. These skill gaps lead to dissatisfaction with cloud services, and the company doesn’t benefit from its investments in cloud infrastructure. ... The cloud is a tool for transforming operations rather than just another piece of IT equipment. Companies can refine their approach to the cloud by establishing effective governance structures and providing employees with training on the optimal utilization of cloud technology. Once they engage architects and synchronize cloud efforts with business objectives, most companies will see tangible results: cost savings, system efficiency, and increased innovation.


The battle to AI-enable the web: NLweb and what enterprises need to know

NLWeb enables websites to easily add AI-powered conversational interfaces, effectively turning any website into an AI app where users can query content using natural language. NLWeb isn’t necessarily about competing with other protocols; rather, it builds on top of them. The new protocol uses existing structured data formats like RSS, and each NLWeb instance functions as an MCP server. “The idea behind NLWeb is it is a way for anyone who has a website or an API already to very easily make their website or their API an agentic application,” Microsoft CTO Kevin Scott said during his Build 2025 keynote. “You really can think about it a little bit like HTML for the agentic web.” ... “NLWeb leverages the best practices and standards developed over the past decade on the open web and makes them available to LLMs,” Odewahn told VentureBeat. “Companies have long spent time optimizing this kind of metadata for SEO and other marketing purposes, but now they can take advantage of this wealth of data to make their own internal AI smarter and more capable with NLWeb.” ... “NLWeb provides a great way to open this information to your internal LLMs so that you don’t have to go hunting and pecking to find it,” Odewahn said. “As a publisher, you can add your own metadata using schema.org standard and use NLWeb internally as an MCP server to make it available for internal use.”
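
The schema.org markup Odewahn points to is ordinary structured metadata that many publishers already maintain. The sketch below shows its general shape as JSON-LD built from a Python dict; the values are invented and nothing here is an NLWeb-specific schema.

```python
# Illustrative schema.org metadata of the kind NLWeb builds on.
# The vocabulary (https://schema.org) is real; the field values are invented.
import json

article_metadata = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Quarterly security update for internal teams",
    "author": {"@type": "Person", "name": "Jane Example"},
    "datePublished": "2025-05-01",
    "keywords": ["security", "internal", "update"],
    "about": "Summary of patch status and open risks for Q2.",
}

print(json.dumps(article_metadata, indent=2))
```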

Daily Tech Digest - May 22, 2025


Quote for the day:

"Knowledge is being aware of what you can do. Wisdom is knowing when not to do it." -- Anonymous


Consumer rights group: Why a 10-year ban on AI regulation will harm Americans

AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report. These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use. ... Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place. But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause. It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.


Putting agentic AI to work in Firebase Studio

An AI assistant is like power steering. The programmer, the driver, remains in control, and the tool magnifies that control. The developer types some code, and the assistant completes the function, speeding up the process. The next logical step is to empower the assistant to take action—to run tests, debug code, mock up a UI, or perform some other task on its own. In Firebase Studio, we get a seat in a hosted environment that lets us enter prompts that direct the agent to take meaningful action. ... Obviously, we are a long way off from a non-programmer frolicking around in Firebase Studio, or any similar AI-powered development environment, and building complex applications. Google Cloud Platform, Gemini, and Firebase Studio are best-in-class tools. These kinds of limits apply to all agentic AI development systems. Still, I would in no wise want to give up my Gemini assistant when coding. It takes a huge amount of busy work off my shoulders and brings much more possibility into scope by letting me focus on the larger picture. I wonder how the path will look, how long it will take for Firebase Studio and similar tools to mature. It seems clear that something along these lines, where the AI is framed in a tool that lets it take action, is part of the future. It may take longer than AI enthusiasts predict. It may never really, fully come to fruition in the way we envision.


Edge AI + Intelligence Hub: A Match in the Making

The shop floor looks nothing like a data lake. There is telemetry data from machines, historical data, MES data in SQL, some random CSV files, and most of it lacks context. Companies that realize this—or already have an Industrial DataOps strategy—move quickly beyond these issues. Companies that don’t end up creating a solution that works with only telemetry data (for example) and then find out they need other data. Or worse, when they get something working in the first factory, they find out factories 2, 3, and 4 have different technology stacks. ... In comes DataOps (again). Cloud AI and Edge AI have the same problems with industrial data. They need access to contextualized information across many systems. The only difference is there is no data lake in the factory—but that’s OK. DataOps can leave the data in the source systems and expose it over APIs, allowing edge AI to access the data needed for specific tasks. But just like IT, what happens if OT doesn’t use DataOps? It’s the same set of issues. If you try to integrate AI directly with data from your SCADA, historian, or even UNS/MQTT, you’ll limit the data and context to which the agent has access. SCADA/Historians only have telemetry data. UNS/MQTT is report by exception, and AI is request/response based (i.e., it can’t integrate). But again, I digress. Use DataOps.


AI-driven threats prompt IT leaders to rethink hybrid cloud security

Public cloud security risks are also undergoing renewed assessment. While the public cloud was widely adopted during the post-pandemic shift to digital operations, it is increasingly seen as a source of risk. According to the survey, 70 percent of Security and IT leaders now see the public cloud as a greater risk than any other environment. As a result, an equivalent proportion are actively considering moving data back from public to private cloud due to security concerns, and 54 percent are reluctant to use AI solutions in the public cloud, citing apprehensions about intellectual property protection. The need for improved visibility is emphasised in the findings. Rising sophistication in cyberattacks has exposed the limitations of existing security tools—more than half (55 percent) of Security and IT leaders reported lacking confidence in their current toolsets' ability to detect breaches, mainly due to insufficient visibility. Accordingly, 64 percent say their primary objective for the next year is to achieve real-time threat monitoring through comprehensive real-time visibility into all data in motion. David Land, Vice President, APAC at Gigamon, commented: "Security teams are struggling to keep pace with the speed of AI adoption and the growing complexity and vulnerability of public cloud environments.


Taming the Hacker Storm: Why Millions in Cybersecurity Spending Isn’t Enough

The key to taming the hacker storm is founded on the core principle of trust: that the individual or company you are dealing with is who or what they claim to be and behaves accordingly. Establishing a high-trust environment can largely hinder hacker success. ... For a pervasive selective trusted ecosystem, an organization requires something beyond trusted user IDs. A hacker can compromise a user’s device and steal the trusted user ID, making identity-based trust inadequate. A trust-verified device assures that the device is secure and can be trusted. But then again, a hacker stealing a user’s identity and password can also fake the user’s device. Confirming the device’s identity—whether it is or it isn’t the same device—hence becomes necessary. The best way to ensure the device is secure and trustworthy is to employ the device identity that is designed by its manufacturer and programmed into its TPM or Secure Enclave chip. ... Trusted actions are critical in ensuring a secure and pervasive trust environment. Different actions require different levels of authentication, generating different levels of trust, which the application vendor or the service provider has already defined. An action considered high risk would require stronger authentication, also known as dynamic authentication.


AWS clamping down on cloud capacity swapping; here’s what IT buyers need to know

For enterprises that sourced discounted cloud resources through a broker or value-added reseller (VAR), the arbitrage window shuts, Brunkard noted. Enterprises should expect a “modest price bump” on steady‑state workloads and a “brief scramble” to unwind pooled commitments. ... On the other hand, companies that buy their own RIs or SPs, or negotiate volume deals through AWS’s Enterprise Discount Program (EDP), shouldn’t be impacted, he said. Nothing changes except that pricing is now baselined. To get ahead of the change, organizations should audit their exposure and ask their managed service providers (MSPs) what commitments are pooled and when they renew, Brunkard advised. ... Ultimately, enterprises that have relied on vendor flexibility to manage overcommitment could face hits to gross margins, budget overruns, and a spike in “finance-engineering misalignment,” Barrow said. Those whose vendor models are based on RI and SP reallocation tactics will see their risk profile “changed overnight,” he said. New commitments will now essentially be non-cancellable financial obligations, and if cloud usage dips or pivots, they will be exposed. Many vendors won’t be able to offer protection as they have in the past.


The new C-Suite ally: Generative AI

While traditional GenAI applications focus on structured datasets, a significant frontier remains largely untapped — the vast swathes of unstructured "dark data" sitting in contracts, credit memos, regulatory reports, and risk assessments. Aashish Mehta, Founder and CEO of nRoad, emphasizes this critical gap. "Most strategic decisions rely on data, but the reality is that a lot of that data sits in unstructured formats," he explained. nRoad’s platform, CONVUS, addresses this by transforming unstructured content into structured, contextual insights. ... Beyond risk management, OpsGPT automates time-intensive compliance tasks, offers multilingual capabilities, and eliminates the need for coding through intuitive design. Importantly, Broadridge has embedded a robust governance framework around all AI initiatives, ensuring security, regulatory compliance, and transparency. Trustworthiness is central to Broadridge’s approach. "We adopt a multi-layered governance framework grounded in data protection, informed consent, model accuracy, and regulatory compliance," Seshagiri explained. ... Despite the enthusiasm, CxOs remain cautious about overreliance on GenAI outputs. Concerns around model bias, data hallucination, and explainability persist. Many leaders are putting guardrails in place: enforcing human-in-the-loop systems, regular model audits, and ethical AI use policies.


Building a Proactive Defence Through Industry Collaboration

Trusted collaboration, whether through Information Sharing and Analysis Centres (ISACs), government agencies, or private-sector partnerships, is a highly effective way to enhance the defensive posture of all participating organisations. For this to work, however, organisations will need to establish operationally secure real-time communication channels that support the rapid sharing of threat and defence intelligence. In parallel, the community will also need to establish processes to enable them to efficiently disseminate indicators of compromise (IoCs) and tactics, techniques and procedures (TTPs), backed up with best practice information and incident reports. These collective defence communities can also leverage the centralised cyber fusion centre model that brings together all relevant security functions – threat intelligence, security automation, threat response, security orchestration and incident response – in a truly cohesive way. Providing an integrated sharing platform for exchanging information among multiple security functions, today’s next-generation cyber fusion centres enable organisations to leverage threat intelligence, identify threats in real-time, and take advantage of automated intelligence sharing within and beyond organisational boundaries. 


3 Powerful Ways AI is Supercharging Cloud Threat Detection

AI’s strength lies in pattern recognition across vast datasets. By analysing historical and real-time data, AI can differentiate between benign anomalies and true threats, improving the signal-to-noise ratio for security teams. This means fewer false positives and more confidence when an alert does sound. ... When a security incident strikes, every second counts. Historically, responding to an incident involves significant human effort – analysts must comb through alerts, correlate logs, identify the root cause, and manually contain the threat. This approach is slow, prone to errors, and doesn’t scale well. It’s not uncommon for incident investigations to stretch hours or days when done manually. Meanwhile, the damage (data theft, service disruption) continues to accrue. Human responders also face cognitive overload during crises, juggling tasks like notifying stakeholders, documenting events, and actually fixing the problem. ... It’s important to note that AI isn’t about eliminating the need for human experts but rather augmenting their capabilities. By taking over initial investigation steps and mundane tasks, AI frees up human analysts to focus on strategic decision-making and complex threats. Security teams can then spend time on thorough analysis of significant incidents, threat hunting, and improving security posture, instead of constant firefighting.
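
The pattern-recognition step can be illustrated with a toy unsupervised detector: train on historical telemetry, then flag new events that fall far outside it. The features and numbers below are synthetic, and this is a sketch of the general approach rather than any vendor's detection pipeline.

```python
# Illustrative anomaly detection sketch using scikit-learn's IsolationForest.
# Feature rows are invented: [logins_per_hour, bytes_out_mb, distinct_regions].
import numpy as np
from sklearn.ensemble import IsolationForest

history = np.array([          # "normal" baseline telemetry (synthetic)
    [12, 40, 1], [15, 55, 1], [9, 35, 1], [14, 60, 2], [11, 45, 1], [13, 50, 1],
])
new_events = np.array([
    [12, 48, 1],              # resembles the baseline
    [95, 900, 6],             # login burst plus large egress from many regions
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)
for event, label in zip(new_events, model.predict(new_events)):
    verdict = "ANOMALY" if label == -1 else "ok"
    print(event.tolist(), "->", verdict)
```

A real deployment would train on far richer telemetry and feed the verdicts into triage rather than acting on them blindly, which is where the human-in-the-loop point above comes in.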


The hidden gaps in your asset inventory, and how to close them

The biggest blind spot isn’t a specific asset. It is trusting that what’s on paper is actually live and in production. Many organizations focus solely on known assets within their documented environments, but this can create a false sense of security. Blind spots are not always the result of malicious intent, but rather of decentralized decision-making, forgotten infrastructure, or evolving technology that hasn’t been brought under central control. External applications, legacy technologies and abandoned cloud infrastructure, such as temporary test environments, may remain vulnerable long after their intended use. These assets pose a risk, particularly when they are unintentionally exposed due to misconfiguration or overly broad permissions. Third-party and supply chain integrations present another layer of complexity. ... Traditional discovery often misses anything that doesn’t leave a clear, traceable footprint inside the network perimeter. That includes subdomains spun up during campaigns or product launches; public-facing APIs without formal registration or change control; third-party login portals or assets tied to your brand and code repositories, or misconfigured services exposed via DNS. These assets live on the edge, connected to the organization but not owned in a traditional sense.

Daily Tech Digest - May 14, 2025


Quote for the day:

"Success is what happens after you have survived all of your mistakes." -- Anonymous


3 Stages of Building Self-Healing IT Systems With Multiagent AI

Multiagent AI systems can allow significant improvements to existing processes across the operations management lifecycle. From intelligent ticketing and triage to autonomous debugging and proactive infrastructure maintenance, these systems can pave the way for IT environments that are largely self-healing. ... When an incident is detected, AI agents can attempt to debug issues with known fixes using past incident information. When multiple agents are combined within a network, they can work out alternative solutions if the initial remediation effort doesn’t work, while communicating the ongoing process to engineers. Keeping a human in the loop (HITL) is vital to verifying the outputs of an AI model, but agents must be trusted to work autonomously within a system to identify fixes and then report these back to engineers. ... The most important step in creating a self-healing system is training AI agents to learn from each incident, as well as from each other, to become truly autonomous. For this to happen, AI agents cannot be siloed into incident response. Instead, they must be incorporated into an organization’s wider system, communicate with third-party agents, and be allowed to draw correlations from each action taken to resolve each incident. In this way, each organization’s incident history becomes the training data for its AI agents, ensuring that the actions they take are organization-specific and relevant.
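
A deliberately simplified sketch of that remediation loop is shown below: an agent works through known fixes from a playbook keyed by symptom, records an audit trail, and escalates to a human engineer when nothing in the playbook works. The class names, playbook, and fix logic are invented for illustration and are not the article’s architecture.

```python
"""
Highly simplified sketch of a self-healing loop with a human in the
loop: try known fixes, keep an audit trail, escalate when they fail.
"""
from dataclasses import dataclass, field


@dataclass
class Incident:
    service: str
    symptom: str
    attempts: list[str] = field(default_factory=list)


# "Known fixes" learned from past incidents, keyed by symptom.
PLAYBOOK = {
    "high_memory": ["restart_service", "scale_out"],
    "disk_full": ["rotate_logs", "expand_volume"],
}


def apply_fix(incident: Incident, fix: str) -> bool:
    """Stand-in for executing a remediation; 'fails' on the first
    attempt here so the fallback path is visible."""
    incident.attempts.append(fix)
    return len(incident.attempts) > 1


def remediation_agent(incident: Incident) -> str:
    """Try each known fix in order; escalate to a human if all fail."""
    for fix in PLAYBOOK.get(incident.symptom, []):
        print(f"[agent] {incident.service}: trying '{fix}'")
        if apply_fix(incident, fix):
            return f"resolved by '{fix}' (report sent to on-call engineer)"
    return "no known fix worked -> escalated to human engineer"


if __name__ == "__main__":
    inc = Incident(service="checkout-api", symptom="high_memory")
    print(remediation_agent(inc))
    print(f"audit trail: {inc.attempts}")
```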


The three refactorings every developer needs most

If I had to rely on only one refactoring, it would be Extract Method, because it is the best weapon against creating a big ball of mud. The single best thing you can do for your code is to never let methods get bigger than 10 or 15 lines. The mess created when you have nested if statements with big chunks of code in between the curly braces is almost always ripe for extracting methods. One could even make the case that an if statement should have only a single method call within it. ... It’s a common motif that naming things is hard. It’s common because it is true. We all know it. We all struggle to name things well, and we all read legacy code with badly named variables, methods, and classes. Often, you name something and you know what the subtleties are, but the next person that comes along does not. Sometimes you name something, and it changes meaning as things develop. But let’s be honest, we are going too fast most of the time and as a result we name things badly. ... In other words, we pass a function result directly into another function as part of a boolean expression. This is… problematic. First, it’s hard to read. You have to stop and think about all the steps. Second, and more importantly, it is hard to debug. If you set a breakpoint on that line, it is hard to know where the code is going to go next.
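
For readers who want the Extract Method idea in code, here is a small before-and-after example of my own (the order-processing details are invented): the body of the nested if statements is pulled out into a single well-named helper so the conditional reads as one sentence.

```python
"""
Small illustration of Extract Method: pull the nested-if body into a
named helper so the conditional reads as a single sentence.
"""

# Before: logic buried inside nested if statements.
def process_order_before(order):
    if order["status"] == "paid":
        if order["items"]:
            total = 0
            for item in order["items"]:
                total += item["price"] * item["qty"]
            if total > 100:
                total *= 0.9  # bulk discount
            order["total"] = round(total, 2)


# After: the calculation lives in one well-named helper.
def calculate_discounted_total(items):
    """Sum line items and apply the 10% bulk discount over 100."""
    total = sum(item["price"] * item["qty"] for item in items)
    if total > 100:
        total *= 0.9
    return round(total, 2)


def process_order_after(order):
    if order["status"] == "paid" and order["items"]:
        order["total"] = calculate_discounted_total(order["items"])


if __name__ == "__main__":
    order = {"status": "paid",
             "items": [{"price": 60, "qty": 1}, {"price": 55, "qty": 1}]}
    process_order_after(order)
    print(order["total"])  # 103.5 after the bulk discount
```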


ENISA launches EU Vulnerability Database to strengthen cybersecurity under NIS2 Directive, boost cyber resilience

The EU Vulnerability Database is publicly accessible and serves various stakeholders, including the general public seeking information on vulnerabilities affecting IT products and services, suppliers of network and information systems, and organizations that rely on those systems and services. ... To meet the requirements of the NIS2 Directive, ENISA has initiated cooperation with several EU and international organisations, including MITRE’s CVE Programme. ENISA is in contact with MITRE to understand the impact and next steps following the announcement about funding for the Common Vulnerabilities and Exposures Program. CVE data, data provided by Information and Communication Technology (ICT) vendors disclosing vulnerability information through advisories, and relevant information such as CISA’s Known Exploited Vulnerability Catalogue are automatically transferred into the EU Vulnerability Database. This will also be achieved with the support of member states, who have established national Coordinated Vulnerability Disclosure (CVD) policies and designated one of their CSIRTs as the coordinator, ultimately making the EUVD a trusted source for enhanced situational awareness in the EU. 
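
As a sketch of the kind of automated feed ingestion described here, the snippet below pulls CISA’s Known Exploited Vulnerabilities catalogue and filters it locally. The feed URL and JSON field names reflect the public catalogue at the time of writing and should be treated as assumptions rather than a documented EUVD interface.

```python
"""
Minimal sketch: fetch CISA's KEV catalogue and filter it locally.
Feed URL and field names are assumptions based on the public feed.
"""
import requests

KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")


def fetch_kev() -> list[dict]:
    """Download the KEV catalogue and return its vulnerability entries."""
    resp = requests.get(KEV_URL, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


def recent_for_vendor(entries: list[dict], vendor: str, since: str) -> list[dict]:
    """Filter entries for one vendor added on or after an ISO date."""
    return [e for e in entries
            if e.get("vendorProject", "").lower() == vendor.lower()
            and e.get("dateAdded", "") >= since]


if __name__ == "__main__":
    entries = fetch_kev()
    for e in recent_for_vendor(entries, "Fortinet", "2025-01-01"):
        print(e.get("dateAdded"), e.get("cveID"), "-", e.get("vulnerabilityName"))
```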


Welcome to the age of paranoia as deepfakes and scams abound

Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a time stamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off. ... Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their résumé, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details. Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to show their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings.


CEOs Sound Alarm: C-Suite Behind in AI Savviness

According to the survey, CEOs now see upskilling internal teams as the cornerstone of AI strategy. The top two limiting factors impacting AI’s deployment and use, they said, are the inability to hire adequate numbers of skilled people and to calculate value or outcomes. "CEOs have shifted their view of AI from just a tool to a transformative way of working," said Jennifer Carter, senior principal analyst at Gartner. In contrast to the CEOs’ assessments in the Gartner survey, most CIOs view themselves as the key drivers and leaders of their organizations’ AI strategies. According to a recent report by CIO.com, 80% of CIOs said they are responsible for researching and evaluating AI products, positioning them as "central figures in their organizations’ AI strategies." As CEOs increasingly prioritize AI, customer experience and digital transformation, these agenda items are directly shaping the evolving role and responsibilities of the CIO. But 66% of CEOs say their business models are not fit for AI purposes. Billions continue to be spent on enterprisewide AI use cases, but little has come in the way of returns. Gartner forecasts a 76.4% surge in worldwide spending on gen AI in 2025, fueled by better foundational models and a global quest for AI-powered everything. But organizations have yet to see consistent results despite the surge in investment. 


Dropping the SBOM, why software supply chains are too flaky

“Mounting software supply chain risk is driving organisations to take action. [There is a] 200% increase in organisations making software supply chain security a top priority and growing use of SBOMs,” said Josh Bressers, vice president of security at Anchore. ... “There’s a clear disconnect between security goals and real-world implementation. Since open source code is the backbone of today’s software supply chains, any weakness in dependencies or artifacts can create widespread risk. To effectively reduce these risks, security measures need to be built into the core of artifact management processes, ensuring constant and proactive protection,” said Douglas. If we take anything from these market analysis pieces, it may be that organisations struggle to balance the demands of delivering software at speed while addressing security vulnerabilities to a level commensurate with the composable interconnectedness of modern cloud-native applications in the Kubernetes universe. ... Alan Carson, Cloudsmith’s CSO and co-founder, remarked, “Without visibility, you can’t control your software supply chain… and without control, there’s no security. When we speak to enterprises, security is high up on their list of most urgent priorities. But security doesn’t have to come at the cost of speed. ...”
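
To ground the SBOM discussion, here is a small sketch of the kind of check an SBOM enables: walking a CycloneDX-style component list and flagging anything on an internal deny list. The embedded SBOM and deny list are invented; real SBOMs would come from a generator such as Syft or a build-time plugin.

```python
"""
Minimal sketch: walk a CycloneDX-style SBOM and flag components that
match an internal deny list of known-bad name/version pairs.
"""
import json

# Stand-in for an SBOM file produced at build time (CycloneDX JSON layout).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "requests", "version": "2.32.3",
     "purl": "pkg:pypi/requests@2.32.3"}
  ]
}
"""

DENY_LIST = {("log4j-core", "2.14.1")}  # illustrative known-bad pairs


def flag_risky_components(sbom_text: str) -> list[str]:
    """Return purls of SBOM components that match the deny list."""
    sbom = json.loads(sbom_text)
    flagged = []
    for comp in sbom.get("components", []):
        if (comp.get("name"), comp.get("version")) in DENY_LIST:
            flagged.append(comp.get("purl", comp.get("name")))
    return flagged


if __name__ == "__main__":
    for purl in flag_risky_components(SBOM_JSON):
        print("risky dependency:", purl)
```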


Does agentic AI spell doom for SaaS?

The reason agentic AI is perceived as a threat to SaaS and not traditional apps is that traditional apps have all but disappeared, replaced by on-demand versions of former client software. But it goes beyond that. AI is considered a potential threat to SaaS for several reasons, mostly because of how it changes who is in control and how software is used. Agentic AI changes how work gets done because agents act on behalf of users, performing tasks across software platforms. If users no longer need to open and use SaaS apps directly because the agents are doing it for them, those apps lose their engagement and perceived usefulness. That ultimately translates into lost revenue, since SaaS apps typically charge either per user or by usage. An advanced AI agent can automate the workflows of an entire department, which may be covered by multiple SaaS products. So instead of all those subscriptions, you just use an agent to do it all. That can lead to significant savings in software costs. On top of the cost savings are time savings. Jeremiah Stone, CTO with enterprise integration platform vendor SnapLogic, said agents have resulted in a 90% reduction in time for data entry and reporting into the company’s Salesforce system. 


Ask a CIO Recruiter: Where Is the ‘I’ in the Modern CIO Role?

First, there are obviously huge opportunities AI can provide the business, whether it’s cost optimization or efficiencies, so there is a lot of pressure from boards and sometimes CEOs themselves saying ‘what are we doing in AI?’ The second side is that there are significant opportunities for AI to enable the business in decision-making. The third leg is that AI is not fully leveraged today; it’s not in a very easy-to-use space. That is coming, and CIOs need to be able to prepare the organization for that change. CIOs need to prepare their teams, as well as business users, and say ‘hey, this is coming, we’ve already experimented with a few things. There are a lot of use cases applied in certain industries; how are we prepared for that?’ ... Just having that vision to see where technology is going and trying to stay ahead of it is important. Not necessarily chasing the shiny new toy or the latest technology, but just being ahead of it is the most important skill set. Look around the corner and prepare the organization for the change that will come. Also, if you retrain some of the people, you have to be more analytical, more business-minded. Those are good skills. That’s not easy to find. A lot of people [who] move into the CIO role are very technical, whether it is coding or heavily on the infrastructure side. That is a commodity today; you need to be beyond that.


Insider risk management needs a human strategy

A technical-only response to insider risk can miss the mark; we need to understand the human side. That means paying attention to patterns, motivations, and culture. Over-monitoring without context can drive good people away and increase risk instead of reducing it. When it comes to workplace monitoring, clarity and openness matter. “Transparency starts with intentional communication,” said Itai Schwartz, CTO of MIND. That means being upfront with employees, not just that monitoring is happening, but what’s being monitored, why it matters, and how it helps protect both the company and its people. According to Schwartz, organizations often gain employee support when they clearly connect monitoring to security rather than surveillance. “Employees deserve to know that monitoring is about securing data – not surveilling individuals,” he said. If people can see how it benefits them and the business, they’re more likely to support it. Being specific is key. Schwartz advises clearly outlining what kinds of activities, data, or systems are being watched, and explaining how alerts are triggered. ... Ethical monitoring also means drawing boundaries. Schwartz emphasized the importance of proportionality: collecting only what’s relevant and necessary. “Allow employees to understand how their behavior impacts risk, and use that information to guide, not punish,” he said.


Sharing Intelligence Beyond CTI Teams, Across Wider Functions and Departments

As companies’ digital footprints expand exponentially, so too do their attack surfaces. And since most phishing attacks can be carried out by even the least sophisticated hackers, thanks to the prevalence of phishing kits sold on cybercrime forums, it has never been harder for security teams to plug all the holes, let alone for other departments undertaking online initiatives that leave them vulnerable. CTI, digital brand protection and other cyber risk initiatives shouldn’t only be utilized by security and cyber teams. Think about legal teams looking to protect IP and brand identities, or marketing teams looking to drive website traffic and demand-generation campaigns. They might need to implement digital brand protection to safeguard their organization’s online presence against threats like phishing websites, spoofed domains, malicious mobile apps, social engineering, and malware. In fact, deepfakes targeting customers and employees now rank as the most frequently observed threat by banks, according to Accenture’s Cyber Threat Intelligence Research. For example, there have even been instances of hackers tricking large language models into creating malware that can be used to hack customers’ passwords.