
Daily Tech Digest - January 31, 2026


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart



Security work keeps expanding, even with AI in the mix

Teams with established policies report greater confidence that AI outputs pass through review steps or guardrails before influencing decisions. Governance work spans data handling, access management, auditability, and lifecycle oversight for AI models and integrations. Security and compliance considerations also affect how quickly teams operationalize automation. Concerns around data protection, regulatory obligations, tool integration, and staff readiness continue to influence adoption patterns. Budget limits and legacy systems remain common constraints, reinforcing the need for governance structures that support day-to-day execution. ... Teams managing large tool inventories report higher strain, particularly when workflows require frequent context switching. Leaders increasingly view automation and tooling improvements as key levers for retaining staff. Practitioners consistently place work-life balance and meaningful impact at the center of retention decisions. ... Many teams express interest in workflow platforms that connect automation, AI, and human review within a single operational layer. These approaches focus on moving work across systems without constant manual handoffs. Respondents associate connected workflows with higher productivity, faster response times, improved data accuracy, and stronger compliance tracking. Interoperability also plays a growing role. Security teams increasingly consider standardized frameworks and APIs that allow AI systems to interact with tools under controlled conditions. 


Human risk management: CISOs’ solution to the security awareness training paradox

Despite regulatory compliance requirements and significant investment, SAT seems to deliver marginal benefits. Clearly, SAT is broken — even with peripheral improvements like synthetic phishing tools. So, what’s needed? Over the next few years, organizations should shift from static/sporadic security training to an emerging discipline called human risk management (HRM). ... HRM is defined as a cybersecurity strategy that identifies, measures, and reduces the risks caused by human behavior. Simply stated, security awareness training is about what employees know; HRM is about what they do. To be more specific, HRM integrates into email security tools, web gateways, and identity and access management (IAM) systems to identify human vulnerabilities. Furthermore, it measures risk using behavioral data and pinpoints an organization’s riskiest users. HRM then seeks to mitigate these risks by applying targeted interventions such as micro-learning, simulations, or automated security controls. Finally, HRM monitors behavioral changes so organizations can track progress. ... From an ROI perspective, HRM offers a much more granular approach to cyber-risk mitigation than standard SAT. CISOs and HR managers can report on improved cyber hygiene and behavior, rather than how many employees have been trained and past generic tests. Repeat offenders are not only identified but also provided with personalized training tools and attention. Ultimately, HRM makes it possible to show a direct correlation between training and a reduction in actual security incidents. ...
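The article describes HRM as measuring risk from behavioral data and pinpointing an organization's riskiest users so interventions can be targeted. As a minimal sketch of that idea — with entirely hypothetical event types and weights, not any vendor's actual scoring model — a per-user risk score might roll up like this:

```python
from collections import Counter

# Hypothetical weights for behavioral signals an HRM platform might ingest
# from email security, web gateway, and IAM telemetry.
EVENT_WEIGHTS = {
    "clicked_phish_sim": 5,
    "reported_phish": -3,      # good behavior lowers the score
    "mfa_fatigue_approve": 4,
    "policy_violation": 2,
}

def risk_scores(events):
    """events: iterable of (user, event_type) pairs -> {user: score}."""
    scores = Counter()
    for user, event in events:
        scores[user] += EVENT_WEIGHTS.get(event, 0)
    return dict(scores)

def riskiest(events, top_n=3):
    """Rank users so targeted interventions (micro-learning, extra controls)
    go to the highest-risk users first."""
    ranked = sorted(risk_scores(events).items(), key=lambda kv: -kv[1])
    return ranked[:top_n]

events = [
    ("alice", "clicked_phish_sim"),
    ("alice", "clicked_phish_sim"),   # repeat offender surfaces naturally
    ("bob", "reported_phish"),
    ("carol", "mfa_fatigue_approve"),
]
print(riskiest(events))  # alice first (repeat offender), then carol, then bob
```

A real deployment would weight signals from actual telemetry and decay scores over time as behavior improves — which is exactly the progress-tracking loop the article describes.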


The Human Exploit: Why Wizer Is the Secret Weapon in the War for Your Digital Soul

We are currently witnessing a systemic failure in how we prepare people for a digital world. From the moment a child gets their first school-issued tablet to the day a retiree checks their pension balance, every individual is a target. This isn’t just a corporate problem; it’s a societal one. That is why I’ve been following the rise of Wizer, a firm that has cracked the code on making security training not just tolerable, but actually effective. ... It is no coincidence that the financial industry has become Wizer’s most aggressive adopter. In banking, trust is the only product you’re actually selling. If a customer’s account is drained because an employee fell for a “vishing” attack—where a hacker samples an IT person’s voice from a voicemail to impersonate them—the damage to the brand is catastrophic. Financial institutions are currently the biggest fans of the platform because they operate under a microscope of regulation and extreme risk. They realized early on that a 45-minute annual compliance video is a waste of time. Wizer’s approach is different; it feels more like an app—specifically Duolingo—than a corporate lecture. ... One of the most profound insights Gabriel Friedlander brings to the table is the necessity of the “Security Awareness Manager” (SAM). Historically, security training was a secondary task for a stressed-out IT admin who would rather be configuring a server. That is a recipe for failure. To build a true culture of security, you need a dedicated facilitator.


Chinese APTs Hacking Asian Orgs With High-End Malware

A pile of evidence suggests that this campaign was carried out by a Chinese APT, but exactly which is unclear. Chinese threat actors are notorious for sharing tools, techniques, and infrastructure. Trend Micro found that this one — which it currently tracks as Shadow-Void-044 — used a C2 domain previously used by UNC3569. A Cobalt Strike sample on one of its servers was signed with a stolen certificate also spotted in a Bronze University campaign. And they linked one of its backdoors to a backdoor developed by a group called "TheWizards," not to be confused with the equally maligned basketball team. A second, separate threat actor has also been using PeckBirdy since at least July 2024. With low confidence, Trend Micro's report linked the group it labeled Shadow-Earth-045 to the one it tracks as Earth Baxia. This campaign was more diverse in its methods, and its targeting, involving both Asian private organizations and government entities. Chinese APTs habitually perform cyberespionage against government agencies in the APAC region and beyond. Trend Micro tells Dark Reading, "These two campaigns remind us that the boundary between cybercrime and cyberespionage is increasingly blurred. One tool used in different [kinds of] attacks is [becoming] more and more popular."



AI agent evaluations: The hidden cost of deployment

Agent evals can be complicated because they test for several possible metrics, including agent reasoning, execution, data leakage, response tone, privacy, and even moral alignment, according to AI experts. ... Most IT leaders budget for obvious costs — including compute time, API calls, and engineering hours — but miss the cost of human judgment in defining what Ferguson calls the “ground truth.” “When evaluating whether an agent properly handled a customer query or drafted an appropriate response, you need domain experts to manually grade outputs and achieve consensus on what ‘correct’ looks like,” he adds. “This human calibration layer is expensive and often overlooked.” ... The sticker shock of agent evals rarely comes from the compute costs of the agent itself, but from the “non-deterministic multiplier” of testing, adds Chengyu “Cay” Zhang, founding software engineer at voice AI vendor Redcar.ai. He compares training agents to training new employees, with both having moods. “You can’t just test a prompt once; you have to test it 50 times across different scenarios to see if the agent holds up or if it hallucinates,” he says. “Every time you tweak a prompt or swap a model, you aren’t just running one test; you’re rerunning thousands of simulations.” ... If an organization wants to save money, the better alternative is to narrow the agent’s scope, instead of cutting back on testing, Zhang adds. “If you skip the expensive steps — like human review or red-teaming — you’re relying entirely on probability,” he says.
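Zhang's point — that you have to run each scenario dozens of times to see whether a non-deterministic agent holds up — can be sketched as a tiny eval harness. Both `run_agent` and `grade` here are stand-ins (a real eval would call a model API and apply the human-calibrated "ground truth" the article describes):

```python
import random

def run_agent(prompt, seed):
    """Stand-in for a non-deterministic agent call; seeded here so the
    sketch is reproducible, unlike a live model."""
    rng = random.Random(seed)
    return "refund approved" if rng.random() < 0.9 else "hallucinated policy"

def grade(output):
    """Stand-in for the human-calibrated 'ground truth' check that domain
    experts would define by consensus."""
    return output == "refund approved"

def eval_scenario(prompt, runs=50):
    """Re-run one scenario many times and report the pass rate, since a
    single run tells you little about a stochastic agent."""
    passes = sum(grade(run_agent(prompt, seed)) for seed in range(runs))
    return passes / runs

rate = eval_scenario("Customer asks for a refund on order #123", runs=50)
print(f"pass rate: {rate:.0%}")
```

The cost multiplier follows directly: every prompt tweak or model swap means rerunning this loop across every scenario in the suite, not just once.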


Social Engineering Hackers Target Okta Single Sign On

What makes these attacks unusual is how criminals engage in real-time conversations as part of their trickery, using the latest generation of highly automated phishing toolkits, which enable them to redirect users to real-looking log-in screens as part of a highly orchestrated attack. "This isn't a standard automated spray-and-pray attack; it is a human-led, high-interaction voice phishing - 'vishing' - operation designed to bypass even hardened multifactor authentication setups," said threat intelligence firm Silent Push. The "live phishing panel" tools being used enable "a human attacker to sit in the middle of a login session, intercepting credentials and MFA tokens in real time to gain immediate, persistent access to corporate dashboards," it said. Callers appear to be using scripts designed to walk victims through an attacker-designated list of desired actions. ... At least so far, the campaign appears to center only on Okta-using organizations. ShinyHunters and similar groups have previously targeted a variety of SSO providers, meaning hackers' focus may well expand, Pilling said. The single best defense against live phishing attacks that don't exploit any flaws or vulnerabilities in vendors' software is strong MFA. "We strongly recommend moving toward phishing-resistant MFA, such as FIDO2 security keys or passkeys where possible, as these protections are resistant to social engineering attacks in ways that push-based or SMS authentication are not," Mandiant's Carmakal said.


AI agents can talk to each other — they just can't think together yet

Current protocols handle the mechanics of agent communication — MCP, A2A, and Outshift's AGNTCY, which it donated to the Linux Foundation, let agents discover tools and exchange messages. But these operate at what Pandey calls the "connectivity and identification layer." They handle syntax, not semantics. The missing piece is shared context and intent. An agent completing a task knows what it's doing and why, but that reasoning isn't transmitted when it hands off to another agent. Each agent interprets goals independently, which means coordination requires constant clarification and learned insights stay siloed. For agents to move from communication to collaboration, they need to share three things, according to Outshift: pattern recognition across datasets, causal relationships between actions, and explicit goal states. "Without shared intent and shared context, AI agents remain semantically isolated. They are capable individually, but goals get interpreted differently; coordination burns cycles, and nothing compounds. One agent learns something valuable, but the rest of the multi-agent-human organization still starts from scratch," Outshift said in a paper. Outshift said the industry needs "open, interoperable, enterprise-grade agentic systems that semantically collaborate" and proposes a new architecture it calls the "Internet of Cognition," where multi-agent environments work within a shared system.
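Outshift's three missing ingredients — pattern recognition across datasets, causal relationships between actions, and explicit goal states — suggest what a semantically richer handoff payload could look like. This is a hypothetical sketch, not the actual MCP, A2A, or AGNTCY schema; the field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Handoff:
    """A hypothetical agent-to-agent handoff that carries semantics, not just
    syntax: the explicit goal state, the causal reason for the task, and what
    was learned so far (so insights don't stay siloed)."""
    task: str
    goal_state: str            # explicit, checkable definition of "done"
    rationale: str             # causal context: why this action follows the last
    learned: list = field(default_factory=list)  # insights worth sharing

    def to_message(self):
        # Serialize for transport over an existing protocol channel;
        # the protocol moves bytes, this payload moves intent.
        return {
            "task": self.task,
            "goal_state": self.goal_state,
            "rationale": self.rationale,
            "learned": self.learned,
        }

h = Handoff(
    task="summarize incident 42",
    goal_state="one-paragraph summary approved by reviewer agent",
    rationale="root cause confirmed, so reporting can start",
    learned=["logs in region-b lag by ~5 min"],
)
print(h.to_message()["goal_state"])
```

Without fields like `goal_state` and `rationale`, the receiving agent must re-derive intent from the task string alone — which is the "coordination burns cycles" failure mode the paper describes.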


Building Software Organisations Where People Can Thrive

Trust builds over time through small interactions. When people know what to expect and how to interact with each other in tough moments, trust is formed, Card argued. Once trust is embedded, teams are more likely to take risks by putting themselves out there to be wrong and fail fast, and that is where the magic happens. You need to actively address bias and microaggressions. If left unchallenged, they quietly erode trust and belonging. Being proactive, fair, and consistent in addressing these behaviours signals your values clearly to the wider organisation, Card said. At the heart of it all is the belief that people-first leadership is performance leadership, Card said. When we take the time to build inclusive, resilient cultures, success follows, not just for the business, but for everyone within it, he concluded. ... psychological safety is the next level up from a trusting environment. Both are the foundations of any healthy, high-performing culture. Without them, people hold back; they’re less likely to share ideas, admit mistakes, or challenge the status quo. And that means your team won’t grow, innovate, or build strong relationships. If you want to build a culture that lasts, where people thrive, not just survive, then building trust and safety isn’t optional. It has to be intentional. And once it’s in place, it unlocks everything else: collaboration, resilience, accountability, and growth.


Stop Delivering Change, Start Designing a Business That Can Actually Grow

Legacy and emerging technologies sit side by side, often competing for attention and investment. Manual and systemised processes overlap in ways that only make sense to the people living inside them. Long-standing roles carry deep, tacit knowledge, while new-in-career roles arrive with different expectations, skills, and assumptions about how work should flow. Each layer is changing, but rarely in a deliberate, joined-up way. When leaders do not have a shared, design-level understanding of how these layers interact, decisions are made in isolation. ... Programme milestones become a proxy for progress. Technology capability becomes a proxy for readiness. Productivity targets replace understanding. Designing the next-generation business model requires a different kind of insight—one that shows how people, process, data, and technology interact end to end. One that makes visible where human judgement still matters, where automation genuinely adds value, and where the handoffs between the two are quietly breaking down. ... Growth and productivity are not things you add through execution. They are the result of deliberate design choices. A business model fit for today makes explicit decisions about what is standardised and what is differentiated, what is automated and what is augmented, what relies on experience and what demands new capability. Those decisions cannot be delegated to programmes alone. They sit squarely with leadership.


Beyond Human-in-the-Loop: Why Data Governance Must Be a System’s Property

The reliance on human action creates a false sense of control; although governance artifacts do exist, responsibility for accountability exists outside the formal governance system and is therefore difficult to enforce. ... The dominant structures of governance today represent a human-in-the-loop system model. In a human-in-the-loop model, technology is used primarily to automate the completion or execution of specific tasks, such as the movement of data between systems, checking the validity of data between systems, and enhancing data that has been provided or created by other systems. The responsibility for the outcome of an automated governance system is not part of the automated system itself. Therefore, humans have the ability to resolve disputes between systems, approve any exceptions made by systems, and determine what is true when using different systems produces different conclusions. ... As data ecosystems continue to expand, we see recurring patterns of failure emerge. As a result, stewardship teams tend to create bottlenecks in their processes, as the volumes of existing exceptions continue to grow much faster than the capability to resolve those exceptions. The presence of escalation paths creates delays in decision-making processes, leading to inconsistencies in the products or services being delivered. Over time, informal methods of addressing issues become accepted as standard operating procedures. Controls within the organization are bypassed in an effort to deliver projects on time.

Daily Tech Digest - April 21, 2025


Quote for the day:

"In simplest terms, a leader is one who knows where he wants to go, and gets up, and goes." -- John Erskine



Two ways AI hype is worsening the cybersecurity skills crisis

Another critical factor in the AI-skills shortage discussion is that attackers are also leveraging AI, putting defenders at an even greater disadvantage. Cybercriminals are using AI to generate more convincing phishing emails, automate reconnaissance, and develop malware that can evade detection. Meanwhile, security teams are struggling just to keep up. “AI exacerbates what’s already going on at an accelerated pace,” says Rona Spiegel, cyber risk advisor at GroScale and former cloud governance leader at Wells Fargo and Cisco. “In cybersecurity, the defenders have to be right all the time, while attackers only have to be right once. AI is increasing the probability of attackers getting it right more often.” ... “CISOs will have to be more tactical in their approach,” she explains. “There’s so much pressure for them to automate, automate, automate. I think it would be best if they could partner cross-functionally and focus on things like policy and urge the unification and simplification of how policies are adapted… and make sure how we’re educating the entire environment, the entire workforce, not just the cybersecurity team.” Appayanna echoes this sentiment, arguing that when used correctly, AI can ease talent shortages rather than exacerbate them.


Data mesh vs. data fabric vs. data virtualization: There’s a difference

“Data mesh is a decentralized model for data, where domain experts like product engineers or LLM specialists control and manage their own data,” says Ahsan Farooqi, global head of data and analytics, Orion Innovation. While data mesh is tied to certain underlying technologies, it’s really a shift in thinking more than anything else. In an organization that has embraced data mesh architecture, domain-specific data is treated as a product owned by the teams relevant to those domains. ... As Matt Williams, field CTO at Cornelis Networks, puts it, “Data fabric is an architecture and set of data services that provides intelligent, real-time access to data — regardless of where it lives — across on-prem, cloud, hybrid, and edge environments. This is the architecture of choice for large data centers across multiple applications.” ... Data virtualization is the secret sauce that can make that happen. “Data virtualization is a technology layer that allows you to create a unified view of data across multiple systems and allows the user to access, query, and analyze data without physically moving or copying it,” says Williams. That means you don’t have to worry about reconciling different data stores or working with data that’s outdated. Data fabric uses data virtualization to produce that single pane of glass: It allows the user to see data as a unified set, even if that’s not the underlying physical reality.
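The core mechanic of data virtualization — a unified view assembled at query time, with nothing copied or moved — can be illustrated with a deliberately tiny sketch. The two "systems" here are just in-memory structures standing in for, say, a CRM and a billing database:

```python
# Two source "systems", each the authoritative home of its own data.
crm = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
billing = {1: {"balance": 120.0}, 2: {"balance": 0.0}}

def unified_view():
    """Lazily join records at query time -- nothing is copied or moved;
    each lookup reaches back into the source systems, so results are
    never stale relative to the sources."""
    for row in crm:
        yield {**row, **billing.get(row["id"], {})}

def query(predicate):
    """Query the unified view as if it were one dataset."""
    return [r for r in unified_view() if predicate(r)]

print(query(lambda r: r["balance"] > 0))
# [{'id': 1, 'name': 'Acme', 'balance': 120.0}]
```

A real virtualization layer does this across heterogeneous stores with pushdown optimization and caching, but the contract is the same: the user sees one dataset even though the physical reality is several.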


Biometrics adoption strategies benefit when government direction is clear

Part of the problem seems to be the collision of private and public sector interests in digital ID use cases like right-to-work checks. They would fall outside the original conception of Gov.uk as a system exclusively for public sector interaction, but the business benefit they provide is strictly one of compliance. The UK government’s Office for Digital Identities and Attributes (OfDIA), meanwhile, brought the register of digital identity and attribute services to the public beta stage earlier this month. The register lists services certified to the digital identity and attributes trust framework to perform such compliance checks, and the recent addition of Gov.uk One Login provided the spark for the current industry conflagration. Age checks for access to online pornography in France now require a “double-blind” architecture to protect user privacy. The additional complexity still leaves clear roles, however, which VerifyMy and IDxLAB have partnered to fill. Yoti has signed up a French pay site, but at least one big international player would rather fight the age assurance rules in court. Aviation and border management is one area where the enforcement of regulations has benefited from private sector innovation. Preparation for Digital Travel Credentials is underway with Amadeus pitching its “journey pass” as a way to use biometrics at each touchpoint as part of a reimagined traveller experience. 



Will AI replace software engineers? It depends on who you ask

Effective software development requires "deep collaboration with other stakeholders, including researchers, designers, and product managers, who are all giving input, often in real time," said Callery-Colyne. "Dialogues around nuanced product and user information will occur, and that context must be infused into creating better code, which is something AI simply cannot do." The area where AIs and agents have been successful so far, "is that they don't work with customers directly, but instead assist the most expensive part of any IT, the programmers and software engineers," Thurai pointed out. "While the accuracy has improved over the years, Gen AI is still not 100% accurate. But based on my conversations with many enterprise developers, the technology cuts down coding time tremendously. This is especially true for junior to mid-senior level developers." AI software agents may be most helpful "when developers are racing against time during a major incident, to roll out a fixed code quickly, and have the systems back up and running," Thurai added. "But if the code is deployed in production as is, then it adds to tech debt and could eventually make the situation worse over the years, many incidents later."


Protected NHIs: Key to Cyber Resilience

We live in a world where cyber threats are continually evolving. Cyber attackers are getting smarter and more sophisticated with their techniques. Traditional security measures no longer suffice. NHIs can be the critical game-changer that organizations have been looking for. So, why is this the case? Well, cyber attackers today are not just targeting humans but machines as well. Remember that your IT estate includes computing resources like servers, applications, and services that all represent potential points of attack. Non-Human Identities have bridged the gap between human identities and machine identities, providing an added layer of protection. NHI security is of utmost importance as these identities can have overarching permissions. One single mishap with an NHI can lead to severe consequences. ... Businesses are significantly relying on cloud-based services for a wide range of purposes, from storage solutions to sophisticated applications. That said, the increasing dependency on the cloud has underscored the pressing need for more robust and sophisticated security protocols. An NHI management strategy substantially supports this quest for fortified cloud security. By integrating with your cloud services, NHIs ensure secured access, moderated control, and streamlined data exchanges, all of which are instrumental in the prevention of unauthorized access and data breaches.


Job seekers using genAI to fake skills and credentials

“We’re seeing this a lot with our tech hires, and a lot of the sentence structure and overuse of buzzwords is making it super obvious,” said Joel Wolfe, president of HiredSupport, a California-based business process outsourcing (BPO) company. HiredSupport has more than 100 corporate clients globally, including companies in the eCommerce, SaaS, healthcare, and fintech sectors. Wolfe, who weighed in on the topic on LinkedIn, said he’s seeing AI-enhanced resumes “across all roles and positions, but most obvious in overembellished developer roles.” ... In general, employers say they don’t have a problem with applicants using genAI tools to write a resume, as long as it accurately represents a candidate’s qualifications and experience. ZipRecruiter, an online employment marketplace, said 67% of 800 employers surveyed reported they are open to candidates using genAI to help write their resumes, cover letters, and applications, according to its Q4 2024 Employer Report. Companies, however, face a growing threat from fake job seekers using AI to forge IDs, resumes, and interview responses. By 2028, a quarter of job candidates could be fake, according to Gartner Research. Once hired, impostors can then steal data, money, or install ransomware. ... Another downside to the growing flood of AI deep fake applicants is that it affects “real” job applicants’ chances of being hired.


How Will the Role of Chief AI Officer Evolve in 2025?

For now, the role is less about exploring the possibilities of AI and more about delivering on its immediate, concrete value. “This year, the role of the chief AI officer will shift from piloting AI initiatives to operationalizing AI at scale across the organization,” says Agarwal. And as for those potential upheavals down the road? CAIOs will no doubt have to be nimble, but Martell doesn’t see their fundamental responsibilities changing. “You still have to gather the data within your company to be able to use with that model and then you still have to evaluate whether or not that model that you built is delivering against your business goals. That has never changed,” says Martell. ... AI is at the inflection point between hype and strategic value. “I think there's going to be a ton of pressure to find the right use cases and deploy AI at scale to make sure that we're getting companies to value,” says Foss. CAIOs could feel that pressure keenly this year as boards and other executive leaders increasingly ask to see ROI on massive AI investments. “Companies who have set these roles up appropriately, and more importantly the underlying work correctly, will see the ROI measurements, and I don't think that chief AI officers [at those] organizations should feel any pressure,” says Mohindra.


Cybercriminals blend AI and social engineering to bypass detection

With improved attack strategies, bad actors have compressed the average time from initial access to full control of a domain environment to less than two hours. Similarly, while a couple of years ago it would take a few days for attackers to deploy ransomware, it’s now being detonated in under a day and even in as few as six hours. With such short timeframes between the attack and the exfiltration of data, companies are simply not prepared. Historically, attackers avoided breaching “sensitive” industries like healthcare, utilities, and critical infrastructures because of the direct impact to people’s lives.  ... Going forward, companies will have to reconcile the benefits of AI with its many risks. Implementing AI solutions expands a company’s attack surface and increases the risk of data getting leaked or stolen by attackers or third parties. Threat actors are using AI efficiently, to the point where any AI employee training you may have conducted is already outdated. AI has allowed attackers to bypass all the usual red flags you’re taught to look for, like grammatical errors, misspelled words, non-regional speech or writing, and a lack of context to your organization. Adversaries have refined their techniques, blending social engineering with AI and automation to evade detection. 


AI in Cybersecurity: Protecting Against Evolving Digital Threats

As much as AI bolsters cybersecurity defenses, it also enhances the tools available to attackers. AI-powered malware, for example, can adapt its behavior in real time to evade detection. Similarly, AI enables cybercriminals to craft phishing schemes that mimic legitimate communications with uncanny accuracy, increasing the likelihood of success. Another alarming trend is the use of AI to automate reconnaissance. Cybercriminals can scan networks and systems for vulnerabilities more efficiently than ever before, highlighting the necessity for cybersecurity teams to anticipate and counteract AI-enabled threats. ... The integration of AI into cybersecurity raises ethical questions that must be addressed. Privacy concerns are at the forefront, as AI systems often rely on extensive data collection. This creates potential risks for mishandling or misuse of sensitive information. Additionally, AI’s capabilities for surveillance can lead to overreach. Governments and corporations may deploy AI tools for monitoring activities under the guise of security, potentially infringing on individual rights. There is also the risk of malicious actors repurposing legitimate AI tools for nefarious purposes. Clear guidelines and robust governance are crucial to ensuring responsible AI deployment in cybersecurity.


AI workloads set to transform enterprise networks

As AI companies leapfrog each other in terms of capabilities, they will be able to handle even larger conversations — and agentic AI may increase the bandwidth requirements exponentially and in unpredictable ways. Any website or app could become an AI app, simply by adding an AI-powered chatbot to it, says F5’s MacVittie. When that happens, a well-defined, structured traffic pattern will suddenly start looking very different. “When you put the conversational interfaces in front, that changes how that flow actually happens,” she says. Another AI-related challenge that networking managers will need to address is that of multi-cloud complexity. ... AI brings in a whole host of potential security problems for enterprises. The technology is new and unproven, and attackers are quickly developing new techniques for attacking AI systems and their components. That’s on top of all the traditional attack vectors, says Rich Campagna, senior vice president of product management at Palo Alto Networks. “At the edge, devices and networks are often distributed, which leads to visibility blind spots,” he adds. That makes it harder to fix problems if something goes wrong. Palo Alto is developing its own AI applications, Campagna says, and has been for years. And so are its customers. 


Daily Tech Digest - April 18, 2025


Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis



How to Use Passive DNS To Trace Hackers Command And Control Infrastructure

This technology works through a network of sensors that monitor DNS query-response pairs, forwarding this information to central collection points for analysis without disrupting normal network operations. The resulting historical databases contain billions of unique records that security analysts can query to understand how domain names have resolved over time. ... When investigating potential threats, analysts can review months or even years of DNS resolution data without alerting adversaries to their investigation—a critical advantage when dealing with sophisticated threat actors. ... The true power of passive DNS in C2 investigation comes through various pivoting techniques that allow analysts to expand from a single indicator to map entire attack infrastructures. These techniques leverage the interconnected nature of DNS to reveal relationships between seemingly disparate domains and IP addresses. IP-based pivoting represents one of the most effective approaches. Starting with a known malicious IP address, analysts can query passive DNS to identify all domains that have historically resolved to that address. This technique often reveals additional malicious domains that share infrastructure but might otherwise appear unrelated.
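The IP-based pivot described above — starting from one known-bad address and expanding to every domain that has historically resolved to it — is essentially a lookup over a table of (domain, IP, timestamp) observations. A minimal sketch, using a toy in-memory dataset in place of a real passive DNS service and reserved example addresses throughout:

```python
# A toy passive DNS dataset: (domain, ip, first_seen) observations.
# Real services hold billions of such records collected from sensors.
PDNS = [
    ("evil-login.example", "203.0.113.7", "2024-03-01"),
    ("cdn-update.example", "203.0.113.7", "2024-04-12"),
    ("benign-blog.example", "198.51.100.5", "2024-01-20"),
    ("mail-relay.example", "203.0.113.7", "2024-05-02"),
]

def pivot_on_ip(ip):
    """IP-based pivot: given one known-bad address, return every domain that
    has historically resolved to it, expanding a single indicator into a
    candidate map of shared attacker infrastructure."""
    return sorted({d for d, addr, _ in PDNS if addr == ip})

def pivot_on_domain(domain):
    """Domain-based pivot: every address a domain has resolved to over time,
    each of which becomes a new starting point for pivot_on_ip."""
    return sorted({addr for d, addr, _ in PDNS if d == domain})

# One malicious IP reveals two further domains that share its infrastructure.
print(pivot_on_ip("203.0.113.7"))
```

Because these lookups run against a historical database rather than live DNS, the adversary sees no queries — the stealth property the passage highlights. Analysts typically alternate the two pivots iteratively until the infrastructure graph stops growing.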


Why digital identity is the cornerstone of trust in modern business

The foundation of digital trust is identity. It is no longer sufficient to treat identity management as a backend IT concern. Enterprises must now embed identity solutions into every digital touchpoint, ensuring that user interactions – whether by customers, employees, or partners – are both frictionless and secure. Modern enterprises must shift from fragmented, legacy systems to a unified identity platform. This evolution allows organisations to scale securely, eliminate redundancies and deliver the streamlined experiences users now expect. ... Digital identity is also a driver of customer experience. In today’s hyper-competitive digital landscape, the sign-up process can make or break a brand relationship. Clunky login screens or repeated verification prompts are quick ways to lose a customer.


Is your business ready for the IDP revolution?

AI-powered document processing offers significant advantages. Using advanced ML, IDP systems accurately interpret even complex and low-quality documents, including those with intricate tables and varying formats. This reduces manual work and the risk of human error. ... IDP also significantly improves data quality and accuracy by eliminating manual data entry, ensuring critical information is captured correctly and consistently. This leads to better decision-making, regulatory compliance and increased efficiency. IDP has wide-ranging applications. In healthcare, it speeds up claims processing and improves patient data management. In finance, it automates invoice processing and streamlines loan applications. In legal, it assists with contract analysis and due diligence. And in insurance, IDP automates information extraction from claims and reports, accelerating processing and boosting customer satisfaction. One specific example of this innovation in action is DocuWare’s own Intelligent Document Processing (DocuWare IDP). Our AI-powered solution streamlines how businesses handle even the most complex documents. Available as a standalone product, in the DocuWare Cloud or on-premises, DocuWare IDP automates text recognition, document classification and data extraction from various document types, including invoices, contracts and ID cards.


Practical Strategies to Overcome Cyber Security Compliance Standards Fatigue

The suitability of a cyber security framework must be determined based on applicable laws, industry standards, organizational risk profile, business goals, and resource constraints. It goes without saying that organizations providing critical services to the US federal government will pursue NIST compliance, while Small and Medium-sized Enterprises (SMEs) may want to focus on the CIS Top 20, given resource constraints. Once the cyber security team has selected the most suitable framework, they should seek endorsement from the executive team or cyber risk governance committee to ensure a shared sense of purpose. ... Mapping will enable organizations to identify overlapping controls and create a unified control set that addresses the requirements of multiple frameworks. This way, the organization can avoid redundant controls and processes, which in turn reduces cyber security team fatigue, accelerates innovation and lowers the cost of security. ... Cyber compliance standards play an integral role in ensuring organizations prioritize the protection of consumers' confidential and sensitive information above profits. But to reduce pressure on cyber teams already battling stress, cyber leaders must take a pragmatic approach that carefully balances compliance with innovation, agility and efficiency.
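The mapping exercise can be made concrete with a small script that finds the controls serving more than one framework. The control names and requirement identifiers below are invented for illustration; real mappings would come from published crosswalks such as the CIS-to-NIST mapping documents.

```python
# Hypothetical internal controls mapped to the framework requirements
# they satisfy (identifiers are illustrative, not authoritative).
CONTROLS = {
    "CTRL-01 MFA for remote access":     {"NIST CSF": ["PR.AC-7"], "CIS": ["6.3"]},
    "CTRL-02 Quarterly access reviews":  {"NIST CSF": ["PR.AC-4"], "CIS": ["6.8"]},
    "CTRL-03 Encrypted offsite backups": {"NIST CSF": ["PR.DS-1"], "CIS": ["11.3"]},
    "CTRL-04 Annual tabletop exercise":  {"NIST CSF": ["RS.RP-1"]},
}

def unified_control_set(controls):
    """Controls that satisfy requirements in more than one framework.

    Implementing (and evidencing) these once avoids running parallel,
    redundant compliance processes per framework.
    """
    return {name: frameworks for name, frameworks in controls.items()
            if len(frameworks) > 1}

def coverage(controls, framework):
    """All requirements of one framework covered by the current controls."""
    covered = set()
    for frameworks in controls.values():
        covered.update(frameworks.get(framework, []))
    return covered

shared = unified_control_set(CONTROLS)
print(f"{len(shared)} of {len(CONTROLS)} controls serve multiple frameworks")
print("CIS requirements covered:", sorted(coverage(CONTROLS, "CIS")))
```

Even this toy version shows the payoff: one evidence-collection workflow per shared control instead of one per framework.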


The Elaboration of a Modern TOGAF Architecture Maturity Model

This innovative TOGAF architecture maturity model provides a structured framework for assessing and enhancing an organization’s enterprise architecture capabilities in organizations that need to become more agile. By defining maturity levels across ten critical domains, the model enables organizations to transition from unstructured, reactive practices to well-governed, data-driven, and continuously optimized architectural processes. The five maturity levels—Initial, Under Development, Defined, Managed, and Measured—offer a clear roadmap for organizations to integrate EA into strategic decision-making, align business and IT investments, and establish governance frameworks that enhance operational efficiency. Through this approach, EA evolves from a support function into a key driver of innovation and business transformation. This model emphasizes continuous improvement and strategic alignment, ensuring that EA not only supports but actively contributes to an organization’s long-term success. By embedding EA into business strategy, security, governance, and solution delivery, enterprises can enhance agility, mitigate risks, and drive competitive advantage. Measuring EA’s impact through financial metrics and performance indicators further ensures that architecture initiatives provide tangible business value. 


Securing digital products under the Cyber Resilience Act

The CRA explicitly states that products should have an appropriate level of cybersecurity based on the risks; the risk-based approach is fundamental to the regulation. This has the advantage that we can set the bar wherever we want, as long as we make a sound risk-based argument for that level. This implies that we must have a methodical categorization of risk, hence we need application risk profiles. To implement this we can follow the quality criteria of maturity levels 1, 2 and 3 of the application risk profiles practice. This includes having a clearly agreed upon, understood, accessible and updated risk classification system. ... Many companies already have SAMM assessments. If you do not have SAMM assessments but use another maturity framework such as OWASP DSOMM or NIST CSF, you could use the available mappings to accelerate the translation to SAMM. Otherwise we recommend doing SAMM assessments and identifying the gaps in the processes needed, then deciding on a roadmap to develop the processes and capabilities over time. ... Under the CRA we need to demonstrate that we have adequate security processes in place, and that we do not ship products with known vulnerabilities. So apart from having a good picture of the data flows, we need a good picture of the processes in place.


Insider Threats, AI and Social Engineering: The Triad of Modern Cybersecurity Threats

Insiders who are targeted or influenced by external adversaries to commit data theft may not be caught by traditional security solutions, because attackers can combine manipulation techniques with technical tactics to gain access to an organization's confidential data. This can be seen in the insider threat campaigns carried out by Famous Chollima, a cyber-criminal group that targeted organizations through employees who were secretly working for the group. The group recruited individuals, falsified their identities, and helped them secure employment with target organizations. Once inside, the group gained access to sensitive information through the employees it had placed. ... Since AI can mimic user behavior, it is hard for security teams to tell the difference between normal activity and AI-generated activity. AI can also be used by insiders to assist in their plans: an insider could use AI, or train AI models, to analyze user activity, pinpoint the window of least activity to deploy malware onto a critical system at an optimal time, and disguise this activity as a legitimate action to avoid detection by monitoring solutions.


How Successful Leaders Get More Done in Less Time

In order to be successful, leaders must make a conscious shift to move from reactive to intentional. They must guard their calendars, build in time for deep work, and set clear boundaries to focus on what truly drives progress. ... Time-blocking is one of the simplest, most powerful tools a leader can use. At its core, time-blocking is the practice of assigning specific blocks of time to different types of work: deep focus, meetings, admin, creative thinking or even rest. Why does it work? Because it eliminates context-switching, which is the silent killer of productivity. Instead of bouncing between tasks and losing momentum, time-blocking gives your day structure. It creates rhythm and ensures that what matters most actually gets done. ... Not everything on your to-do list matters. But without a clear system to prioritize, everything feels urgent. That's how leaders end up spending hours on reactive work while their most impactful tasks get pushed to "tomorrow." The fix? Use prioritization frameworks like the 80/20 rule (20% of tasks drive 80% of results) to stay focused on what actually moves the needle. ... If you're still doing everything yourself, there's a chance you're creating a bottleneck. The best leaders know that delegation buys back time and creates opportunities for others to grow. 


The tech backbone creating the future of infrastructure

Governments and administrators around the world are rapidly realizing the benefits of integrated infrastructure. A prime example is the growing trend for connecting utilities across borders to streamline operations and enhance efficiency. The Federal-State Modern Grid Deployment Initiative, involving 21 US states, is a major step towards modernizing the power grid, boosting reliability and enhancing resource management. Across the Atlantic, the EU is linking energy systems; by 2030, each member nation should be sharing at least 15% of its electricity production with its neighbors. On a smaller scale, the World Economic Forum is encouraging industrial clusters—including in China, Indonesia, Ohio and Australia—to share resources, infrastructure and risks to maximize economic and environmental value en route to net zero. ... Data is a nation’s most valuable asset. It is now being collected from multiple infrastructure points—traffic, energy grids, utilities. Infusing it with artificial intelligence (AI) in the cloud enables businesses to optimize their operations in real time. Centralizing this information, such as in an integrated command-and-control center, facilitates smoother collaboration and closer interaction among different sectors. 


No matter how advanced the technology is, it can all fall apart without strong security

One cybersecurity trend that truly excites me is the convergence of Artificial Intelligence (AI) with cybersecurity, especially in the areas of threat detection, incident response, and predictive risk management. This has motivated me to pursue a PhD in Cybersecurity using AI. Unlike traditional rule-based systems, AI is revolutionising cybersecurity by enabling proactive and adaptive defence strategies through contextual intelligence, shifting the focus from reactive to proactive measures. ... The real magic lies in combining AI with human judgement — what I often refer to as “human-in-the-loop cybersecurity.” This balance allows teams to scale faster, stay sharp, and focus on strategic defence instead of chasing every alert manually. What I have learnt from all this is that the fusion of AI and cybersecurity is not just an enhancement; it’s a paradigm shift. However, the key is achieving balance. Hence, AI should augment human intelligence rather than supplant it. ... In the realm of financial cybersecurity, the most significant risk isn’t solely technical; it stems from the gap between security measures and business objectives. As the CISO, my responsibility extends beyond merely protecting against threats; I aim to integrate cybersecurity into the core of the organisation, transforming it into a strategic enabler rather than a reactive measure.

Daily Tech Digest - April 16, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


How to lead humans in the age of AI

Quiet the noise around AI and you will find the simple truth that the most crucial workplace capabilities remain deeply human. ... This human skills gap is even more urgent when Gen Z is factored in. They entered the workforce aligned with a shift to remote and hybrid environments, resulting in fewer opportunities to hone interpersonal skills through real-life interactions. This is not a critique of an entire generation, but rather an acknowledgment of a broad workplace challenge. And Gen Z is not alone in needing to strengthen communication across generational divides, but that is a topic for another day. ... Leaders must embrace their inner improviser. Yes, improvisation, like what you have watched on Whose Line Is It Anyway? Or the awkward performance your college roommate invited you to in that obscure college lounge. The skills of an improviser are a proven method for thriving amidst uncertainty. Decades of experience at Second City Works and studies published by The Behavioral Scientist confirm the principles of improv equip us to handle change with agility, empathy, and resilience. ... Make listening intentional and visible. Respond with the phrase, “So what I’m hearing is,” followed by paraphrasing what you heard. Pose thoughtful questions that indicate your priority is understanding, not just replying.


When companies merge, so do their cyber threats

Merging two companies means merging two security cultures. That is often harder than unifying tools or policies. While the technical side of post-M&A integration is important, it’s the human and procedural elements that often introduce the biggest risks. “When CloudSploit was acquired, one of the most underestimated challenges wasn’t technical, it was cultural,” said Josh Rosenthal, Holistic Customer Success Executive at REPlexus.com. “Connecting two companies securely is incredibly complex, even when the acquired company is much smaller.” Too often, the focus in M&A deals lands on surface-level assurances like SOC 2 certifications or recent penetration tests. While important, those are “table stakes,” Rosenthal noted. “They help, but they don’t address the real friction: mismatched security practices, vendor policies, and team behaviors. That’s where M&A cybersecurity risk really lives.” As AI accelerates the speed and scale of attacks, CISOs are under increasing pressure to ensure seamless integration. “Even a phishing attack targeting a vendor onboarding platform can introduce major vulnerabilities during the M&A process,” Rosenthal warned. To stay ahead of these risks, he said, smart security leaders need to dig deeper than documentation.


Measuring success in dataops, data governance, and data security

If you are on a data governance or security team, consider the metrics that CIOs, chief information security officers (CISOs), and chief data officers (CDOs) will consider when prioritizing investments and the types of initiatives to focus on. Amer Deeba, GVP of Proofpoint DSPM Group, says CIOs need to understand what percentage of their data is valuable or sensitive and quantify its importance to the business—whether it supports revenue, compliance, or innovation. “Metrics like time-to-insight, ROI from tools, cost savings from eliminating unused shadow data, or percentage of tools reducing data incidents are all good examples of metrics that tie back to clear value,” says Deeba. ... Dataops technical strategies include data pipelines to move data, data streaming for real-time data sources like IoT, and in-pipeline data quality automations. Using the reliability of water pipelines as an analogy is useful because no one wants pipeline blockages, leaky pipes, pressure drops, or dirty water from their plumbing systems. “The effectiveness of dataops can be measured by tracking the pipeline success-to-failure ratio and the time spent on data preparation,” says Sunil Kalra, practice head of data engineering at LatentView. “Comparing planned deployments with unplanned deployments needed to address issues can also provide insights into process efficiency.”
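The two pipeline metrics Kalra mentions are straightforward to compute from run logs. The run records below are fabricated for illustration; a real version would read them from a CI/CD system or an orchestrator such as a workflow scheduler.

```python
# Fabricated pipeline run history. "planned" distinguishes scheduled
# deployments from unplanned ones made to fix an issue.
runs = [
    {"status": "success", "planned": True},
    {"status": "success", "planned": True},
    {"status": "failure", "planned": True},
    {"status": "success", "planned": False},  # unplanned hotfix deployment
    {"status": "success", "planned": True},
]

def pipeline_metrics(runs):
    """Success-to-failure ratio and share of unplanned deployments."""
    successes = sum(r["status"] == "success" for r in runs)
    failures = len(runs) - successes
    unplanned = sum(not r["planned"] for r in runs)
    return {
        "success_ratio": successes / failures if failures else float("inf"),
        "unplanned_share": unplanned / len(runs),
    }

print(pipeline_metrics(runs))
```

Tracked over time, a falling success ratio or a rising unplanned share flags exactly the process-efficiency problems the excerpt describes.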


How Safe Is the Code You Don’t Write? The Risks of Third-Party Software

Open-source and commercial packages and public libraries accelerate innovation, drive down development costs, and have become the invisible scaffolding of the Internet. GitHub recently highlighted that 99% of all software projects use third-party components. But with great reuse comes great risk. Third-party code is a double-edged sword. On the one hand, it’s indispensable. On the other hand, it’s a potential liability. In our race to deliver software faster, we’ve created sprawling software supply chains with thousands of dependencies, many of which receive little scrutiny after the initial deployment. These dependencies often pull in other dependencies, each one potentially introducing outdated, vulnerable, or even malicious code into environments that power business-critical operations. ... The risk is real, so what do we do? We can start by treating third-party code with the same caution and scrutiny we apply to everything else that enters the production pipeline. This includes maintaining a living inventory of all third-party components across every application and monitoring their status to prescreen updates and catch suspicious changes. With so many ways for threats to hide, we can’t take anything on trust, so next comes actively checking for outdated or vulnerable components as well as new vulnerabilities introduced by third-party code. 
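A living inventory can start as something this simple: pinned components checked against an advisory feed. The package names, versions, and advisories below are invented for illustration; in practice the inventory would come from an SBOM and the advisories from a vulnerability feed such as OSV or the GitHub Advisory Database.

```python
# Invented inventory (component -> pinned version) and advisory data
# (component -> set of versions known to be vulnerable).
inventory = {"libfoo": "1.2.0", "libbar": "0.9.1", "libbaz": "2.4.7"}
advisories = {"libbar": {"0.9.0", "0.9.1"}, "libqux": {"3.0.0"}}

def flag_vulnerable(inventory, advisories):
    """Components whose pinned version appears in a known advisory.

    Run on every build so new advisories against old pins surface
    immediately, not only at initial deployment.
    """
    return sorted(
        name for name, version in inventory.items()
        if version in advisories.get(name, set())
    )

print(flag_vulnerable(inventory, advisories))  # components needing updates
```

Real-world version matching needs range semantics (e.g. "all versions below 1.0.0") rather than exact sets, which is what dedicated SCA tools provide; the principle of continuously diffing inventory against advisories is the same.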


The AI Leadership Crisis: Why Chief AI Officers Are Failing (And How To Fix It)

Perhaps the most dangerous challenge facing CAIOs is the profound disconnect between expectations and reality. Many boards anticipate immediate, transformative results from AI initiatives – the digital equivalent of demanding harvest without sowing. AI transformation isn't a sprint; it's a marathon with hurdles. Meaningful implementation requires persistent investment in data infrastructure, skills development, and organizational change management. Yet CAIOs often face arbitrary deadlines that are disconnected from these realities. One manufacturing company I worked with expected their newly appointed CAIO to deliver $50 million in AI-driven cost savings within 12 months. When those unrealistic targets weren't met, support for the role evaporated – despite significant progress in building foundational capabilities. ... There are many potential risks of AI, from bias to privacy concerns, and the right level of governance is essential. CAIOs are typically tasked with ensuring responsible AI use yet frequently lack the authority to enforce guidelines across departments. This accountability-without-authority dilemma places CAIOs in an impossible position. They're responsible for AI ethics and risk management, but departmental leaders can ignore their guidance with minimal consequences.


OT security: how AI is both a threat and a protector

Burying one’s head in the sand, a favorite pastime among some OT personnel, no longer works. Security through obscurity is and remains a bad idea. Heinemeyer: “I’m not saying that everyone will be hacked, but it is increasingly likely these days.” Possibly, the ostrich policy has to do with, yes, the reporting on OT vulnerabilities, including by yours truly. Ancient protocols, ICS systems and PLCs with exploitable vulnerabilities are evidently risk factors. However, the people responsible for maintaining these systems at manufacturing and utility facilities know better than anyone that actual exploitation of these obscure systems is improbable. ... Given the increasing threat, is the new focus on common best practices enough? We have already concluded that vulnerabilities should not be judged solely on the CVSS score. They are an indication, certainly, but a combination of CVEs with middle-of-the-range scoring appears to have the most serious consequences. Heinemeyer says that treating the identification of all vulnerabilities as the ultimate solution was the established view from the 1990s to the 2010s. In recent years, he says, security professionals have realized that specific issues need to be prioritized, quantifying technical exploitability through various measurements (e.g., EPSS).
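That prioritization shift can be sketched as a weighted score. The weights and the sample CVE entries below are arbitrary illustrations, not a recommended formula; the point is that a mid-severity, highly exploitable, internet-exposed vulnerability can outrank a critical one that is unlikely ever to be exploited.

```python
def priority(cvss, epss, internet_exposed):
    """Toy priority score: exploitation likelihood (EPSS, 0..1) and
    exposure are weighted more heavily than raw CVSS severity.
    The 0.3/0.5/0.2 weights are illustrative only."""
    score = 0.3 * (cvss / 10) + 0.5 * epss + (0.2 if internet_exposed else 0.0)
    return round(score, 3)

# Hypothetical findings: a critical-but-dormant CVE vs. a medium-severity
# one that is actively exploited and reachable from the internet.
vulns = [
    {"cve": "CVE-XXXX-0001", "cvss": 9.8, "epss": 0.01, "exposed": False},
    {"cve": "CVE-XXXX-0002", "cvss": 6.5, "epss": 0.92, "exposed": True},
]

ranked = sorted(vulns,
                key=lambda v: priority(v["cvss"], v["epss"], v["exposed"]),
                reverse=True)
for v in ranked:
    print(v["cve"], priority(v["cvss"], v["epss"], v["exposed"]))
```

Sorting by CVSS alone would invert this ranking, which is exactly the 1990s-to-2010s mindset the excerpt says the industry is moving away from.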


In a Social Engineering Showdown: AI Takes Red Teams to the Mat

In a revelation that shouldn’t surprise, but still should alarm security professionals, AI has gotten much more proficient in social engineering. Back in the day, AI was 31% less effective than human beings in creating simulated phishing campaigns. But now, new research from Hoxhunt suggests that the game-changing technology’s phishing performance against elite human red teams has improved by 55%. ... Using AI offensively can raise legal and regulatory hackles related to privacy laws and ethical standards, Soroko adds, as well as creating a dependency risk. “Over-reliance on AI could diminish human expertise and intuition within cybersecurity teams.” But that doesn’t mean bad actors will win the day or get the best of cyber defenders. Instead, security teams could and should turn the tables on them. “The same capabilities that make AI an effective phishing engine can — and must — be used to defend against it,” says Avist. With an emphasis on “must.” ... It seems that tried and true basics are a good place to start. “Ensuring transparency, accountability and responsible use of AI in offensive cybersecurity is crucial,” says Kowski. As with any aspect of tech and security, keeping AI models “up-to-date with the latest threat intelligence and attack techniques is also crucial,” he says. “Balancing AI capabilities with human expertise remains a key challenge.”


Optimizing CI/CD for Trust, Observability and Developer Well-Being

While speed is often cited as a key metric for CI/CD pipelines, the quality and actionability of the feedback provided are equally, if not more, important for developers. Jones, emphasizing the need for deep observability, stresses, “Don’t just tell me that the steps of the pipeline succeeded or failed, quantify that success or failure. Show me metrics on test coverage and show me trends and performance-related details. I want to see stack traces when things fail. I want to be able to trace key systems even if they aren’t related to code that I’ve changed because we have large complex architectures that involve a lot of interconnected capabilities that all need to work together.” This level of technical insight empowers developers to understand and resolve issues quickly, highlighting the importance of implementing comprehensive monitoring and logging within your CI/CD pipeline to provide developers with detailed insights into build, test, and deployment processes. Shifting feedback earlier in the development lifecycle serves everyone well; the key is ensuring it is contextual and arrives before code is merged. For example, running security scans at the pull request stage, rather than after deployment, ensures developers get actionable feedback while still in context.
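The kind of quantified feedback Jones asks for can be sketched as a small report builder that runs at the pull-request stage. The run-result fields and the coverage baseline below are hypothetical stand-ins for whatever a real test runner and coverage tool emit.

```python
def build_feedback(run, baseline_coverage):
    """Turn a bare pass/fail into quantified, actionable feedback:
    a coverage trend versus the main branch, plus the final line of
    each failing test's stack trace."""
    ok = not run["failed"]
    delta = run["coverage"] - baseline_coverage
    lines = [
        f"status: {'PASS' if ok else 'FAIL'}",
        f"coverage: {run['coverage']:.1f}% ({delta:+.1f}% vs main)",
    ]
    for test in run["failed"]:
        last_trace_line = test["trace"].splitlines()[-1]
        lines.append(f"  FAILED {test['name']}: {last_trace_line}")
    return "\n".join(lines)

# Hypothetical pull-request run with one failing test.
run = {
    "coverage": 81.4,
    "failed": [{
        "name": "test_checkout_total",
        "trace": "Traceback (most recent call last):\nAssertionError: 99 != 100",
    }],
}
print(build_feedback(run, baseline_coverage=83.0))
```

Posting this summary on the pull request itself, rather than burying it in a post-deployment dashboard, is what keeps the feedback contextual.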


AI agents vs. agentic AI: What do enterprises want?

If AI and AI agents are application components, then they fit both into business processes and workflow. A business process is a flow, and these days at least part of that flow is the set of data exchanges among applications or their components—what we typically call a “workflow.” It’s common to think of the process of threading workflows through both applications and workers as a process separate from the applications themselves. Remember the “enterprise service bus”? That’s still what most enterprises prefer for business processes that involve AI. Get an AI agent that does something, give it the output of some prior step, and let it then create output for the step beyond it. The decision as to whether an AI agent is then “autonomous” is really made by whether its output goes to a human for review or is simply accepted and implemented. ... What enterprises like about their vision of an AI agent is that it’s possible to introduce AI into a business process without having AI take over the process or require the process be reshaped to accommodate AI. Tech adoption has long favored strategies that let you limit scope of impact, to control both cost and the level of disruption the technology creates. This favors having AI integrated with current applications, which is why enterprises have always thought of AI improvements to their business operation overall as being linked to incorporating AI into business analytics.
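The point that autonomy is really a routing decision can be sketched as a workflow step whose output either flows straight to the next step or passes through a human reviewer first. The agent, reviewer, and payload below are hypothetical placeholders.

```python
def run_agent_step(agent, payload, autonomous, reviewer=None):
    """Run one AI step inside a business-process workflow.

    Whether the agent is 'autonomous' is decided purely by routing:
    either its output is accepted and passed to the next step, or it
    goes through a human reviewer who can approve or reject it.
    """
    output = agent(payload)
    if autonomous:
        return output
    assert reviewer is not None, "non-autonomous steps need a reviewer"
    return output if reviewer(output) else None

# Hypothetical agent and reviewer for illustration: an invoice
# classifier and a reviewer who approves everything.
classify = lambda invoice_id: {"invoice": invoice_id, "category": "utilities"}
approve_all = lambda result: True

result = run_agent_step(classify, "INV-1042", autonomous=False,
                        reviewer=approve_all)
print(result)
```

Nothing about the agent itself changes between the two modes, which is why enterprises can introduce AI into an existing workflow without reshaping the process around it.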


Liquid Cooling is ideal today, essential tomorrow, says HPE CTO

We’re moving from standard consumption levels—like 1 kilowatt per rack—to as high as 3 kilowatts or more. The challenge lies in provisioning that much power and doing it sustainably. Some estimates suggest that data centers, which currently account for about 1% of global power consumption, could rise to 5% if trends continue. This is why sustainability isn’t just a checkbox anymore—it’s a moral imperative. I often ask our customers: Who do you think the world belongs to? Most pause and reflect. My view is that we’re simply renting the world from our grandchildren. That thought should shape how we design infrastructure today. ... Air cooling works until a point. But as components become denser, with more transistors per chip, air struggles. You’d need to run fans faster and use more chilled air to dissipate heat, which is energy-intensive. Liquid, due to its higher thermal conductivity and density, absorbs and transfers heat much more efficiently. Some DLC systems use cold plates only on select components. Others use them across the board. There are hybrid solutions too, combining liquid and air. But full DLC systems, like ours, eliminate the need for fans altogether. ... Direct liquid cooling (DLC) is becoming essential as data centers support AI and HPC workloads that demand high performance and density. 

Daily Tech Digest - February 17, 2025


Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis


Like it or not, AI is learning how to influence you

We need to consider the psychological impact that will occur when we humans start to believe that the AI agents giving us advice are smarter than us on nearly every front. When AI achieves a perceived state of “cognitive supremacy” with respect to the average person, it will likely cause us to blindly accept its guidance rather than using our own critical thinking. This deference to a perceived superior intelligence (whether truly superior or not) will make agent manipulation that much easier to deploy. I am not a fan of overly aggressive regulation, but we need smart, narrow restrictions on AI to avoid superhuman manipulation by conversational agents. Without protections, these agents will convince us to buy things we don’t need, believe things that are untrue and accept things that are not in our best interest. It’s easy to tell yourself you won’t be susceptible, but with AI optimizing every word they say to us, it is likely we will all be outmatched. One solution is to ban AI agents from establishing feedback loops in which they optimize their persuasiveness by analyzing our reactions and repeatedly adjusting their tactics. In addition, AI agents should be required to inform you of their objectives. If their goal is to convince you to buy a car, vote for a politician or pressure your family doctor for a new medication, those objectives should be stated up front.


Leveraging AI for Business Continuity and Disaster Recovery in the Work-From-Home Era

AI-driven tools can monitor the health and performance of hardware and predict hardware failure before it happens using anomaly detection algorithms. For example, if a hard drive is starting to fail or there’s unusual network activity, AI systems can flag the activity/potential problem early and send an email to alert the WFH user or corporate IT staff, allowing businesses to take preventative action. ... AI can detect anomalies in network traffic or access patterns which may indicate a cyberattack (e.g., ransomware, phishing, or data breach). AI-powered cybersecurity tools, such as intrusion detection systems (IDS) and endpoint protection software, can respond automatically to threats by isolating affected systems or rolling back malicious changes. ... Small businesses may not have reliable or frequent data backups or rely on manual processes (e.g., external hard drives) that aren’t automated or secure. It may be difficult to recover without a proper backup strategy if critical data is lost due to hardware failure, cyber-attacks, or natural disasters. ... AI-assisted BC and DR solutions offer a range of benefits, particularly for SOHO and WFH users. These offerings are becoming essential as businesses of all sizes seek to maintain operational resilience in an ever-changing technological landscape. 
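The anomaly-detection step can be as simple as a z-score over a rolling history of a health metric, such as a drive's reallocated-sector count. The readings below are fabricated, and production tools use far richer models, but the flag-early-and-alert principle is the same.

```python
from statistics import mean, stdev

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag a reading that sits more than z_threshold standard
    deviations away from the mean of the recent history."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

# Fabricated SMART-style readings from a healthy baseline period.
sector_counts = [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]

print(is_anomalous(sector_counts, 2))   # normal fluctuation, no alert
print(is_anomalous(sector_counts, 40))  # sudden jump: alert IT before failure
```

When the flag fires, the system would email the WFH user or corporate IT staff so a backup and drive swap can happen before data is lost.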


GenAI can make us dumber — even while boosting efficiency

“A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the study found. Overall, workers’ confidence in genAI’s abilities correlates with less effort in critical thinking. The focus of critical thinking shifts from gathering information to verifying it, from problem-solving to integrating AI responses, and from executing tasks to overseeing them. The study suggests that genAI tools should be designed to better support critical thinking by addressing workers’ awareness, motivation, and ability barriers. ... As Agentic AI becomes common, people may come to rely on it for problem-solving — but how will we know it’s doing things correctly, Gold said. People might accept its results without questioning, potentially limiting their own skills development by allowing technology to handle tasks. Lev Tankelevitch, a senior researcher with Microsoft Research, said not all genAI use is bad. He said there’s clear evidence in education that it can enhance critical thinking and learning outcomes. 


How to harness APIs and AI for intelligent automation

APIs are the steady bridges connecting diverse systems and data sources. This reliable technology, which emerged in the 1960s and matured during the noughties ecommerce boom, is bridging today’s next-gen technologies. APIs allow data transfer to be automated, which is essential for training AI models efficiently. Rather than building complex integrations from scratch, they standardize data flow to ensure the data that feeds AI models is accurate and reliable. ... Data preprocessing is the critical step before training any AI model. APIs can ensure that AI applications and models only receive preprocessed data. This minimizes manual errors which smoothes the AI training pipeline. With a direct interface to standardized data, developers can focus on refining the model architecture rather than spending excessive time on data cleanup. Real-time evaluation keeps AI models in check in dynamic environments. By feeding real-time performance data back into the system, developers can quickly adjust parameters to improve the model. ... As your data volumes and transaction rates increase, your APIs must scale accordingly. Performance issues like latency or downtime can disrupt AI training and real-time processing. To be responsive under heavy loads, design APIs with load balancing, caching, and built-in redundancy to maintain consistent performance during peak use. 
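The guarantee that "models only receive preprocessed data" can be enforced with a validation layer sitting between the API response and the training pipeline. The record shape below is a hypothetical example.

```python
def preprocess(records):
    """Validation layer between an API response and model training:
    drop incomplete rows and coerce types so downstream training code
    never sees raw, inconsistent data."""
    clean = []
    for record in records:
        value = record.get("value")
        if value is None:
            continue  # skip incomplete rows rather than train on them
        try:
            clean.append({"id": str(record["id"]), "value": float(value)})
        except (KeyError, ValueError, TypeError):
            continue  # skip rows that fail type coercion
    return clean

# Hypothetical raw API payload with one clean row and two bad ones.
raw = [{"id": 1, "value": "3.5"}, {"id": 2, "value": None}, {"value": 7}]
print(preprocess(raw))
```

Keeping this step behind the API boundary means every consumer of the endpoint gets the same cleaned data, which is how APIs minimize the manual errors the excerpt mentions.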


Applying Behavioral Economics to Phishing and Social Engineering Attacks

It’s all about deeply and thoroughly understanding human behavior and how these behaviors are impacted by influences that use cognitive biases, emotions, social influences, and contextual factors to drive decisions. Bad actors in the world of cybersecurity also prey upon these human tendencies to drive actions that put organizations at risk. ... Humans are social creatures that trust those they believe are authorities. They’re driven by fear, greed, and curiosity that can cloud their judgement. And they’re prone to cognitive shortcuts—biases that often drive behaviors. Understanding the power of these drivers can help organizations put strategies into place to thwart them. ... Here are some important steps that can help employees make better decisions: Training employees about the threat of cyberattacks, the form these attacks generally take, and their role in helping to avert them is an important first step. Training should be ongoing, not a single instance or once-a-year event. Phishing simulations have proven to be a very effective way to tangibly reduce security breakdowns. These simulations serve to test employee awareness and identify areas of opportunity for improvement. Strong authentication measures can help keep accounts secure by requiring two or more methods of identification and verification—multi-factor authentication—before allowing access to information or systems.


Why Digital Projects Need Transparency and Accountability

As a CIO, it is easy to underestimate the time it will take to build forward. In the public sector, this takes longer due to inherent risk aversion. In my first few months at DWP, I felt I was making a difference, but then the size of the prize began to take its toll and the risk factors of going forward began to set in. As CIOs, it is our role to persuade, influence and keep in mind where we are trying to get to. We landed that vision with the senior team, but DWP's size and geographic spread made it harder to get the spokes of the business to hear the same story and grasp the same benefits. If I had my time again, I would spend more time with the business, less at the center, and try to build momentum that was unstoppable. As I completed my first 100 days in the CIO role at Segro, one of the key takeaways from DWP was making sure the digital leadership team knew how to act together. In my new role, I am able to replicate that at a faster pace. Brand identity matters. At Segro, we are not known as the digital team, and I am striving to change that. The organization will benefit from unifying its understanding of technology, transformation and data.


Navigating Europe’s AI Code of Practice Before the Clock Runs Out

The Code of Practice for general-purpose AI demonstrates a sincere effort to get the details right. Yet, in a rush to cover every contingency, it risks overlooking the bigger picture: spurring the next generation of AI-driven breakthroughs that can speed up drug discovery, modernize public services, and let small farmers use new predictive tools for planting and harvesting. Innovation is a delicate process, especially in emerging areas like large-scale language models or real-time climate analytics. Europe possesses the scientific expertise and market size to shape a future where these tools become transformative assets in every corner of the continent. But that future hinges on how carefully policymakers, industry players, and civil society calibrate the rules. ... Europe’s AI revolution will not happen on autopilot. Real progress demands revamping processes, investing in talent, and scaling up what works. The public sector must also move faster if Europe is to modernize healthcare, education, and core government services. Tangled or rigid rules risk derailing Europe’s ambitions. Europe’s digital regulations already weigh heavily on businesses. Over the past 25 years, the number of economy-wide laws doubled, and the EU has rolled out close to 100 tech-focused laws. High-minded ideals often mix with fragmented enforcement and overlapping rules.


Seven Common Reasons Why Data Science Projects Fail

Large organizations may own hundreds of data assets spread across sprawling, multi-faceted IT infrastructures. Unless they have a detailed, continuously updated data catalog in place that tracks all of those assets – which many don’t – simply finding the data that the team needs to complete a project can present a major challenge. Here again, however, tools and techniques are available that can help. The major solution is data discovery software, which can automatically identify data resources, including those that are not documented. ... Too often, businesses decide that they want to do something with their data, but they don’t know exactly what. For example, they might establish a high-level goal like using data-derived insights to grow revenue, without determining exactly which types of revenue-related challenges they want to solve with help from data. Avoiding this pitfall is simple: You need to articulate precise deliverables and outcomes at the start of your project. There’s always room to adjust the details a bit once a project is underway, but you should know from the beginning what the overarching outcomes of the project should be. ... A final key challenge that can thwart data science project success is the failure to understand what the goals of data science are, and which methodologies and resources data science requires.
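To make the data discovery idea concrete, here is a toy sketch of what such tooling automates at its core: walking a directory tree and building a minimal catalog entry (path, size, column names) for every CSV it finds, including "forgotten" files no one documented. The file names and layout below are invented for the demo; real discovery products scan databases, object stores, and SaaS systems, not just folders.

```python
import csv
import os
import tempfile

def discover_csv_assets(root):
    """Walk a directory tree and catalog every CSV found: path, size, columns."""
    catalog = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith(".csv"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, newline="") as fh:
                header = next(csv.reader(fh), [])   # first row as column names
            catalog.append({"path": path,
                            "bytes": os.path.getsize(path),
                            "columns": header})
    return catalog

# Demo: a documented dataset plus an undocumented legacy export.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "legacy"))
with open(os.path.join(root, "sales.csv"), "w", newline="") as fh:
    csv.writer(fh).writerows([["region", "revenue"], ["EMEA", 1200]])
with open(os.path.join(root, "legacy", "old_export.csv"), "w", newline="") as fh:
    csv.writer(fh).writerow(["customer_id", "churned"])

for entry in discover_csv_assets(root):
    print(entry["path"], entry["columns"])
```

Even this trivial scan surfaces the legacy export alongside the known dataset — the automated equivalent of the catalog-building work that, done manually across hundreds of assets, stalls so many projects.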


What’s changing the rules of enterprise AI adoption for IT leaders

As model costs fall and the value from AI migrates up to the application layer, enterprises are going to have even greater choice in business solutions, either from third parties or those developed in-house. For CIOs with access to the right resources, building applications internally is now a more realistic proposition. This becomes increasingly attractive in the context of complex business processes that may be unique to enterprises. As the costs of running models fall to near zero, the ROI equation shifts dramatically. According to Forrester Research, the ability to run hyper-efficient models like DeepSeek locally on PCs opens up a new era of edge intelligence, which businesses can deploy across organizations. “The real value in AI isn’t just in building bigger models, but innovating on top of them and in implementing them efficiently,” says Devesh Mishra, president of CoreAI at digital transformation specialists Keystone. “Companies that pair foundation model advancements with deep business and operational expertise will lead the next phase of AI-driven ROI.” This deep understanding of industry verticals and their specific issues and needs will define success for many vendors as they increasingly compete with in-house development teams.


Rowing in the Same Direction: 6 Tips for Stronger IT and Security Collaboration

Due to market dominance, many software vendors focus on Windows, but IT fleets today include a mix of Chromebooks, Linux systems and Apple devices. Security and IT teams must recognize that the weakest endpoint determines the overall defense posture. By ensuring IT and security teams are aligned on what’s in the environment, you can break down silos and work together toward shared security goals, such as zero-trust implementation. ... Security and IT teams should collaborate to ensure policies protect the overall business mission, not just the bottom line. For example, if security requires an agent to collect telemetry for advanced analysis (e.g., CrowdStrike, Halcyon, etc.), what’s the performance impact on endpoints? If the agent is running AI/ML workloads, how is it optimized for performance on XPU and non-XPU systems? IT fleet leaders care about security, but they also demand top performance and battery life from devices. Together, security and IT teams can align on solutions that offer best-in-class security without degrading fleet performance. ... Ownership in IT and security is one of the hardest challenges to solve. In many cases, responsibility over cloud workloads, applications and ephemeral systems isn’t clearly defined.
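The "weakest endpoint" point lends itself to a simple check both teams can share: aggregate the device inventory by OS and flag platforms where the security agent is missing. The sketch below uses a hard-coded, hypothetical fleet list purely for illustration — in practice the records would come from an MDM or asset-management API.

```python
from collections import Counter

# Hypothetical inventory; real data would come from an MDM/asset API.
fleet = [
    {"host": "win-101", "os": "Windows",  "agent_installed": True},
    {"host": "win-102", "os": "Windows",  "agent_installed": True},
    {"host": "cb-201",  "os": "ChromeOS", "agent_installed": False},
    {"host": "lx-301",  "os": "Linux",    "agent_installed": False},
    {"host": "mac-401", "os": "macOS",    "agent_installed": True},
]

def coverage_gaps(devices):
    """Per-OS count of endpoints missing the security agent."""
    return dict(Counter(d["os"] for d in devices if not d["agent_installed"]))

print(coverage_gaps(fleet))  # → {'ChromeOS': 1, 'Linux': 1}
```

A Windows-only view of this fleet would report full coverage; the per-OS rollup is what surfaces the Chromebook and Linux gaps that actually set the defense posture.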