Daily Tech Digest - February 20, 2025


Quote for the day:

"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell


The Business Case for Network Tokenization in Payment Ecosystems

Network tokenization replaces sensitive Primary Account Numbers with tokens, rendering stolen data useless to fraudsters and addressing a major area of fraud: online payments. "Fraud rates are seven times higher online than in physical stores, as criminals exploit exposed card numbers," Mastercard's chief digital officer Pablo Fourez told Information Security Media Group. Shifting to tokenization protects businesses from financial losses and safeguards reputation and customer trust. ... But adoption of network tokenization does come with challenges including issuer readiness, regulatory hurdles and inconsistent implementations. Integrating network tokenization across multiple card networks requires a separate integration for each network while ensuring interoperability and maintaining high security standards, Fourez said. Compliance with varying regulatory requirements and achieving scalability without performance issues can be resource-intensive, he said. Ramakrishnan points to delays in token provisioning that may slow transactions if the technology is not scalable. Situations in which one entity in the payment ecosystem does not use network tokens can be major failure points that lead to transaction failure and cart abandonment.
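The core mechanism can be sketched in a few lines: the network's token vault maps a merchant-scoped token to the real PAN, so the merchant stores and transmits only the token. This is a minimal illustration, not any card network's actual API; the class and method names, the format-preserving "9"-prefixed token, and the merchant-scoping rule are all assumptions for the sketch.

```python
import secrets

class TokenVault:
    """Illustrative stand-in for a card network's token vault."""

    def __init__(self):
        self._vault = {}  # token -> (pan, merchant_id)

    def provision(self, pan: str, merchant_id: str) -> str:
        """Issue a format-preserving 16-digit token in place of the PAN."""
        token = "9" + "".join(secrets.choice("0123456789") for _ in range(15))
        self._vault[token] = (pan, merchant_id)
        return token

    def detokenize(self, token: str, merchant_id: str):
        """Only the network can map a token back to the PAN, and only
        for the merchant the token was provisioned to."""
        entry = self._vault.get(token)
        if entry and entry[1] == merchant_id:
            return entry[0]
        return None

vault = TokenVault()
token = vault.provision("4111111111111111", merchant_id="shop-42")
assert token != "4111111111111111"                   # a stolen token reveals nothing
assert vault.detokenize(token, "shop-42") == "4111111111111111"
assert vault.detokenize(token, "attacker") is None   # scoped to one merchant
```

Because the token is useless outside the vault and outside its merchant scope, a breach of the merchant's database no longer exposes card numbers, which is the fraud-reduction argument made above.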


The hidden gap in cyber recovery: What happens when roles and processes are overlooked

There’s a big difference between disaster recovery (DR) and cyber recovery. For DR, infrastructure and backup teams are the central players and an organization can be up and running in no time. Cyber recovery, however, involves the entire business — backup teams, network teams, cloud personnel, incident response teams from security, teams that are validating the active directory before restores, as well as the application owners and business owners that depend on those functions. ... “There are bigger questions that you only get to by testing your process,” Grantham says. “Whatever your business is, it’s about looking at that data and saying, how do I provide access in this modified environment? For every one of the applications supporting that, having a run book to say, this is the people, the process, linked to the technology to get me to a user in the system performing their daily function because they need to be able to do their job. That run book gets them there. If your data is just sitting on a hard drive in the middle of a data center, how does that help your business?” ... “The idea that cyber recovery strategies require continual evolution, just like zero trust is an evolution of different identity standards, is not something that a lot of businesses have accepted yet,” Grantham says. 


Microsoft Makes Quantum Computing Breakthrough With New Chip

While it’s been working on its own quantum computing hardware, Microsoft has also been building out a quantum computing stack, with its Q# development language and quantum algorithms that can run on the quantum hardware from IonQ, Pasqal, Quantinuum, QCI, and Rigetti that’s available through Azure — but the most powerful systems so far are still in the 20-30 qubit range. ... A prototype fault-tolerant quantum computer will be available “in years, not decades,” promised Chetan Nayak, Microsoft’s VP of quantum hardware. The potential of topological qubits is why DARPA announced earlier this month that Microsoft is one of the first two companies to be invited to join its rigorous program for investigating whether it’s possible to build a useful quantum computer — where the value of the computing it can do is worth more than what it costs to build and run — by 2033, using what the agency calls underexplored systems. ... Initially, there are just eight physical qubits in the Majorana 1 QPU, which Microsoft can assign in different ways to get the number of logical qubits it wants. Calling it a QPU is a reminder that there will probably be a lot of different kinds of quantum computer, and that researchers will pick the one that suits them — like choosing a different GPU for a specific workload.


CISO Conversations: Kevin Winter at Deloitte and Richard Marcus at AuditBoard

A CISO can only be as good as the security team. Assembling a strong team requires good selection and effective management: that is, who do you recruit, and how do you maintain top efficiency? Recruitment is a balance between multiple individual rock stars and a single cohesive team. That’s a personal choice for each CISO, but usually involves a compromise: the best possible individuals with the widest possible range of diversity that will still make a single team. Having recruited the team, the CISO must help them excel both as individuals and as one team. “I love the Japanese concept of ‘ikigai’,” said Marcus. Ikigai can be defined as finding your life’s purpose – the meeting point of personal passion, skills, mission, and vocation. “I think you need to deliver an experience for the security team that checks all these boxes. They need to have interesting problems. They need to be using modern technology with some autonomy over what they use. You need to provide a sense of purpose – that what they’re doing is not just about the immediate technical work, but will have a broader impact on the company, the industry, and the world at large. And of course, you must pay them what they’re worth. I think if you do all these things, you’ll have a very happy and motivated and engaged team.”


Will AI destroy human creativity? No - and here's why

Today's AI models do more than automate. They engage. They understand user input conversationally, simulate thought processes, and adapt to preferences. AI's ability to adapt comes from machine learning constantly improving by analyzing huge amounts of data. This has made AI smarter and easier for people and businesses to use. The impact is undeniable in creative industries as AI tools can design logos, generate intricate artwork, and write compelling narratives, offering creators new possibilities. These advancements are transforming how people work, create, and innovate. Generative AI is now the focus of business strategies, with companies using these technologies to enhance efficiency and engage with their audiences in new ways. ... That said, the role of human creativity isn't being erased; it's evolving. Perhaps the designers and writers of tomorrow aren't disappearing but transforming into prompt engineers and crafting ideas in collaboration with these tools, mastering a new kind of artistry. Let's face it: Just because AI creates something doesn't mean it's good. The ability to discern, curate, and refine that intangible "eye" for greatness will always remain profoundly human. Unless, of course, Skynet becomes a reality.


Unknown and unsecured: The risks of poor asset visibility

Asset visibility remains a critical issue because organizations often lack a real-time, unified view of their IT, OT, and cloud environments. Shadow IT, unmanaged endpoints, remote work and third-party integrations create blind spots that expand the attack surface. Without complete visibility, security teams struggle to detect and respond to threats effectively, leaving organizations vulnerable to breaches and compromises. Good visibility across enterprise assets is no longer just a nice-to-have; it’s a necessity to survive in the digital world. ... Improving visibility of digital assets is critical for all organizations; otherwise, blind spots will exist in networks that criminals can exploit. Organizations must treat every endpoint as a potential entry point, ensuring it is seen and secured. It’s also important to remember that perfect technology doesn’t exist and vulnerabilities will always surface in products, so organizations must not only have an inventory of their assets, but also the ability to apply patches and security updates automatically, without necessarily having to pull all systems down. Improving OT visibility requires a specialised approach due to the sensitive nature of legacy and ICS systems.
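The inventory gap described above can be made concrete by reconciling what discovery tools actually see on the network against the official asset inventory. This is a toy sketch under stated assumptions: the data sources, field names, and hostnames are invented for illustration; real discovery would pull from network scans, EDR agents, and cloud provider APIs.

```python
def find_blind_spots(inventory: set, discovered: set) -> dict:
    """Compare the official asset inventory with live discovery results.

    'unmanaged' assets are on the network but not in the inventory
    (potential shadow IT); 'stale' assets are inventoried but never
    observed (decommissioned, or a discovery gap)."""
    return {
        "unmanaged": discovered - inventory,
        "stale": inventory - discovered,
    }

inventory = {"db-01", "web-01", "laptop-17"}
discovered = {"web-01", "laptop-17", "nas-unknown", "iot-cam-3"}

gaps = find_blind_spots(inventory, discovered)
assert gaps["unmanaged"] == {"nas-unknown", "iot-cam-3"}  # blind spots to secure
assert gaps["stale"] == {"db-01"}                         # inventory drift to fix
```

Run continuously, a reconciliation like this turns "do we have blind spots?" from a periodic audit question into an always-on metric, which is the real-time unified view the excerpt calls for.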


Hacking Cybersecurity Leadership

Cybersecurity culture often fosters a sense of individualism that lends itself to operating in isolation—individual interest in areas of cybersecurity leads to individually driven projects, individual certifications, etc. That said, being siloed is not a sustainable mode of operation. For most cyber professionals, the challenges are too complex to resolve individually, and negative experiences (failure, shame, guilt, embarrassment, etc.), when experienced alone, are likely to take an even greater toll than when those experiences are shared with others. ... In order to boost a sense of competence at the individual level, leaders need to create a learning-oriented environment that provides opportunities for individuals to explore, gather, and practice applying new information. There are specific strategies to build or strengthen these aspects of the work environment. ... Leaders can also embrace a growth-mindset culture whereby mistakes do not equate to failures; rather, mistakes are repositioned as learning opportunities to develop and grow. This allows individuals to safely explore and practice various aspects of their work. It’s important to note that this approach also requires a shift toward more developmental, rather than punitive or evaluative, feedback.


Real-World AppSec Priorities Observed in BSIMM15

Many organizations are still in the nascent stages of defining AI-specific attack surfaces and integrating security mechanisms. To stay ahead of these emerging risks, organizations should proactively gather intelligence on AI-related threats, establish secure design patterns for AI models, and ensure that AI security is seamlessly integrated into existing policies and frameworks. Proactivity is key here — a well-rounded strategy to leverage the potential AI can offer must be accompanied by strategic approaches to counter risks and threats it introduces. The use of adversarial testing, which involves simulating potential attacks to identify vulnerabilities, has more than doubled over the past year. This trend indicates a growing recognition among companies of the importance of continuously testing AI models to prevent them from being exploited by malicious actors. While it is not yet possible to definitively attribute the rise in these BSIMM activities to AI-specific concerns, it is evident that these practices will play a crucial role in addressing the emerging risks associated with AI. ... The decline does raise a red flag around the preparedness of organizations to defend against the evolving threat landscape. It also illustrates a need for security education and awareness initiatives. 


Why Best-of-Breed Security Is Non-Negotiable for SIEM

With cyber threats evolving at an unprecedented pace, security leaders can no longer afford to treat SIEM as just another layer in a bloated security stack. Instead, they must take a strategic approach, ensuring that their SIEM leverages truly best-of-breed security—one that enhances integration, streamlines operations, and delivers actionable threat intelligence. So, is more always better? Or is it time to redefine what best-of-breed really means for SIEM? ... The appeal of best-of-breed security is clear: superior threat detection, deeper visibility, and greater flexibility to adapt to evolving threats. However, this approach also introduces complexity. Managing multiple vendors, ensuring seamless integration, and avoiding operational inefficiencies can quickly become overwhelming. So, how do security leaders strike the right balance? Success lies in strategic selection, integration, and optimization—choosing tools that complement each other and enhance Security Information and Event Management (SIEM) rather than adding more noise. Adopting a best-of-breed security approach within a SIEM framework offers several advantages. By integrating specialized security solutions, organizations can optimize threat detection, improve agility, and reduce reliance on a single vendor. 


Digital twins and transitioning to a greener, safer industrial sector

Shah finds the term digital twins is often misunderstood. “Digital twins are not a single technology and standalone solution, but a strategic framework – one that combines and leverages multiple technologies. This can include AI, reality capture, 3D reality models and advanced web technologies which create a virtual 3D replica of an industrial site and its facilities.” Aiming to be the first climate-neutral continent by 2050, Europe has set some aspirational goals, and according to Shah, digital twins could be a real game-changer in how the world future-proofs its industrial sites and transitions to net zero. ... She noted many industrial sites struggle with issues related to technical documents and on-the-ground conditions, and this matters because inaccurate information can cause accidents. AI and 3D-rendered models enable experts to envision a scene in real time, allowing for greater accuracy than is often possible in a physical walk-through of a facility. “What’s more, site personnel can also simulate processes like ‘lockout tagout’ safely, where machines are isolated and shut down for maintenance, without real-world risks, and predict what could go wrong if an asset was isolated incorrectly, for example.”

Daily Tech Digest - February 19, 2025


Quote for the day:

"Go confidently in the direction of your dreams. Live the life you have imagined." -– Henry David Thoreau


Why Observability Needs To Go Headless

Not all logs have long-term value, but that’s one of the advantages of headless observability and decoupled storage. Teams have the freedom and flexibility to determine which logs should be retained for longer periods. Web application firewall (WAF) and other security logs can be retained over the long term and made available to cybersecurity teams and threat hunters. Other application logs can provide long-term insights into how resources are being used for capacity planning and anomaly detection. Let’s take a closer look at a real, tangible use case where observability data can be valuable for other teams: real user monitoring (RUM). In the realm of observability, RUM allows teams to proactively monitor how end users are experiencing web applications. Issues like slow page loads can be mitigated before they frustrate users. Beyond observability, RUM data can also provide insights into how your end users are interacting with your brand and your products. This data is invaluable for marketing, advertising and leadership teams that need to plan strategy. ... As a real-world example, many enterprises use CDN log data for real user monitoring. In the short term, monitoring CDNs is important for ensuring good user experiences and fast loading times of digital assets. However, being able to retain huge volumes of log data long term and cost-effectively provides certain advantages to enterprises.
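The tiering idea above — decide per log category how long it lives and where it sits — can be expressed as a simple routing rule. This is an illustrative sketch only: the categories, retention periods, and tier names are assumptions, not any observability product's configuration.

```python
# Retention policy by log category, in days. WAF and CDN/RUM logs are kept
# long-term for threat hunting and business analytics; routine application
# and debug logs expire quickly. Values here are illustrative.
RETENTION_DAYS = {
    "waf": 365,
    "cdn": 365,
    "app": 30,
    "debug": 7,
}

def route_log(record: dict) -> dict:
    """Attach a retention tier so object-storage lifecycle rules can expire
    each category independently of the query layer (decoupled storage)."""
    days = RETENTION_DAYS.get(record.get("category"), 30)  # default: 30 days
    return {**record,
            "retention_days": days,
            "tier": "cold" if days >= 365 else "hot"}

assert route_log({"category": "waf"})["tier"] == "cold"     # long-term, cheap storage
assert route_log({"category": "debug"})["retention_days"] == 7
assert route_log({"category": "unknown"})["tier"] == "hot"  # safe default
```

Because the policy lives in the routing layer rather than in the monitoring backend, security, marketing, and capacity-planning teams can each query the retained data without the observability tool dictating what survives.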


Why the CIO role should be split in two

The fact is that within enterprises, existing architecture is overly complex, often including new digital systems interconnected with legacy systems. This ‘hybrid’ architecture is a combination of best and bad practice. When there is an outage, the new digital platforms can invariably be restored to recover business process support. But because they do not operate in isolation, instead connecting with legacy technologies, business operations themselves may not fully recover if the legacy systems continue to be impacted by the outage. For most enterprises stuck in this hybrid state, the way forward is to be more disciplined about architecture. ... Simplifying architecture at an enterprise level is something the CIO and CISO should work on together as a shared goal. The benefits of doing so will accrue over time rather than immediately, hence there can be some reluctance to prioritize it. ... What does all this have to do with my opening discussion about the CIO and complementary IT executive roles? Splitting the CIO role into smaller and smaller pieces would be okay if doing so led to better outcomes. But I would argue that examples like the ones above show that the multiple-exec approach is not a success story we should be bragging about. In this structure, the two CIOs would share ownership of the IT strategy.


Generative AI vs. the software developer

AI is not going to turn your customer support people (Elvis bless them) into senior software developers. A customer support person might be able to think “I need to track the connection between items in inventory, the customer’s shopping cart, and the discount pricing for a given item,” but unless that person also knows how to code, they will have a seriously hard time instructing an AI model to generate the code they need. Most likely, they aren’t going to know if the code the AI produces even runs, let alone works correctly. But AI can help actual developers in many ways. It can look at existing code you have written and help you produce the next thing that you need to write. It can even write large routines and classes that you ask it to. But it is not going to create the things you need without you having a large say in what that is. You need to know how to craft a prompt to get precisely what is needed. ... Now, that prompt will be pretty effective in getting what is asked for. But the trick here, obviously, is that you have to know what a React component is, what Tailwind is, the fact that you want tests, what TypeScript is, what null is, and that you’d even need to handle missing values. There is a lot of knowledge and experience wrapped up in that prompt, and it’s not something that an inexperienced developer, or certainly a non-developer, would be able to write.


Beyond the Screen: Humanising Digital Learning

Digital learning holds a lot of promise, aiming to bring the most dynamic and engaging elements of in-person training into the digital space. Interactive tools like quizzes, breakout rooms, and mini-tasks demonstrate just how far we’ve come in replicating real-world engagement online. However, we continue to see issues with retention and follow through. Recent research shows that 66% of employees still find on-the-job learning to be more effective than formal online courses. This disconnect often stems from a lack of deep, meaningful engagement. Without it, employees are less likely to retain knowledge or apply their skills effectively in the workplace. This is particularly crucial when it comes to human skills—broader soft skills like communication, emotional intelligence, and critical thinking. Unlike technical skills that are typically learned ‘by the book’, softer skills are learned and applied every day. The solution lies in moving beyond passive consumption to real-world, interactive learning simulations. ... The shift to digital learning offers incredible potential, but realising that potential requires a thoughtful approach. By embracing AI-powered technologies and prioritising interactive, personalised and bite-sized content, organisations can create learning experiences that are engaging, practical and transformative.


Shadow AI: How unapproved AI apps are compromising security, and what you can do about it

Shadow AI introduces significant risks, including accidental data breaches, compliance violations and reputational damage. It’s the digital steroid that allows those using it to get more detailed work done in less time, often beating deadlines. Entire departments have shadow AI apps they use to squeeze more productivity into fewer hours. “I see this every week,” Vineet Arora, CTO at WinWire, recently told VentureBeat. “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.” ... “If you paste source code or financial data, it effectively lives inside that model,” Golan warned. Arora and Golan find that companies default to using shadow AI apps for a wide variety of complex tasks, in effect training public models on their own data. Once proprietary data gets into a public-domain model, more significant challenges begin for any organization. It’s especially challenging for publicly held organizations that often have significant compliance and regulatory requirements. Golan pointed to the coming EU AI Act, which “could dwarf even the GDPR in fines,” and warns that regulated sectors in the U.S. risk penalties if private data flows into unapproved AI tools. There’s also the risk of runtime vulnerabilities and prompt injection attacks that traditional endpoint security and data loss prevention (DLP) systems and platforms aren’t designed to detect and stop.
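One pragmatic first step toward visibility is flagging unsanctioned AI usage in egress logs by matching requested hosts against an allowlist of sanctioned services. The sketch below is purely illustrative: the domain lists, the internal allowlisted host, and the log schema are invented assumptions, and real deployments would use a secure web gateway or DLP policy rather than a Python filter.

```python
# Hypothetical allowlist: AI services the organization has sanctioned.
SANCTIONED = {"approved-ai.internal.example.com"}

# Hypothetical watchlist of known AI service domains (illustrative subset).
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com",
              "approved-ai.internal.example.com"}

def flag_shadow_ai(egress_log: list) -> list:
    """Return egress entries hitting AI services that are not sanctioned."""
    return [entry for entry in egress_log
            if entry["host"] in AI_DOMAINS and entry["host"] not in SANCTIONED]

log = [
    {"user": "alice", "host": "chat.openai.com"},                    # shadow AI
    {"user": "bob",   "host": "approved-ai.internal.example.com"},   # sanctioned
    {"user": "carol", "host": "news.example.com"},                   # not AI
]
hits = flag_shadow_ai(log)
assert [h["user"] for h in hits] == ["alice"]
```

Flagging is only the discovery half; as Arora argues, the remedy is offering sanctioned alternatives good enough that departments stop routing around IT.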


Think being CISO of a cybersecurity vendor is easy? Think again

When people in this industry hear that a CISO is working at a cybersecurity vendor, it can trigger a number of assumptions — many of them misguided. There’s a stereotype that the role isn’t “real” CISO work, that it’s more akin to being a field CISO, someone primarily outward-facing and focused on supporting sales or amplifying the brand. The assumption goes something like this: How hard can it be to secure a security company, and isn’t the “real” work done at companies outside of this bubble? ... Some might think that working at a security company limits your perspective of what’s out there in the broader industry, but I found the opposite to be true. I gained a deeper understanding of how organizations evaluate security solutions and what they truly care about. I saw firsthand the challenges customers faced when implementing security tools, and that experience gave me empathy, insight, and a renewed ability to speak their language. Now that I’m back in industry, I’m bringing that perspective with me. The transition wasn’t a step “down” or a shift away from anything; it was just the next phase in my career. Security leadership is security leadership, no matter where you practice it. The challenges remain complex, the responsibilities remain vast, and the importance of aligning security with business outcomes remains paramount.


Lack of regulations, oversight in health care IT can cause harm

Increasingly, health care organizations have outsourced their health IT infrastructure to companies owned and operated by private equity, venture capital and Big Tech firms that view them as platforms to experiment with unproven AI and machine-learning tools. "The unregulated integration of AI tools into these systems will make it even harder to protect patients' rights," Appelbaum said. "Moreover, because these records contain so much information and are centralized, they are among the most lucrative targets for cyberattacks and hackers," Batt said, noting that in 2024, data breaches exposed the health records of more than 200 million Americans. As a result, health care organizations must now invest billions more in cybersecurity systems owned and operated by venture capital, private equity and Big Tech. The authors argue that the federal government is once again behind in setting safeguards for the adoption of new health IT, and that the lessons from 30 years of attempts to set adequate standards for information-sharing in electronic health systems—as detailed in these reports—should spur regulators to act quickly and rein in unregulated financial activities in health IT. Batt explained, "The history of the health IT implementation and the lack of sufficient regulatory oversight and enforcement of standards should give us great pause for the current enthusiasm over the adoption of AI and machine learning in health information systems."


The Future of Data: How Decision Intelligence is Revolutionizing Data

Decision Intelligence is an interdisciplinary field that uses AI to enhance all aspects of decision-making across all areas of a business. It blends concepts of Data Science (statistics, machine learning, AI, analytics) with Behavioral Sciences (psychology, neuroscience, economics, and managerial sciences) to understand how decisions are made and how outcomes are measured. ... Decision Intelligence (DI) can be considered a subset of applied AI: it uses AI to build a reliable data foundation by collecting, organizing, and connecting data, and then applies AI and analytics to turn that data into useful insights for better decision-making. In short, while AI provides the technology to mimic human intelligence, DI focuses on applying that technology to improve how decisions are made. ... You can use any of your machine learning models, like regression models, classification models, time series forecasting models, clustering algorithms, or reinforcement learning, for implementing Decision Intelligence. These machine learning models will help identify patterns in the data and make predictions based on those patterns, but decision intelligence takes that information one step further by incorporating it into a broader framework that can actively guide the decision-making process by considering the predictions and the potential outcomes and consequences of different choices.
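The prediction-versus-decision distinction above can be shown in a few lines: the model produces a score, but the decision layer weighs the expected outcome of each available action. Everything here is a made-up illustration — the churn rule, the payoff numbers, and the 70% discount effectiveness are invented assumptions standing in for a real model and a real cost model.

```python
def predict_churn(customer: dict) -> float:
    """Stand-in for any ML model (regression, classification, ...)."""
    return 0.8 if customer["support_tickets"] > 3 else 0.1

def decide(customer: dict) -> str:
    """Decision layer: pick the action with the best expected value,
    not just react to the raw prediction."""
    p = predict_churn(customer)
    expected_value = {
        # Illustrative payoffs: losing the customer costs 100 units;
        # a discount costs 20 but is assumed to cut churn risk by 70%.
        "do_nothing":     -100 * p,
        "offer_discount": -20 + (-100 * p * 0.3),
    }
    return max(expected_value, key=expected_value.get)

assert decide({"support_tickets": 5}) == "offer_discount"  # EV -44 beats -80
assert decide({"support_tickets": 0}) == "do_nothing"      # EV -10 beats -23
```

Swapping in a better model changes the probabilities; swapping in a better cost model changes the decisions — which is why DI treats the two as separate, composable layers.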


ManpowerGroup exec explains how to manage an AI workforce

It’s not just a technology anymore. We are looking for individuals that have the industry experience. We can take somebody with industry experience and train them on the technical part of the job. “It’s a lot harder for us to take somebody with the technical skills and teach them how the industry works. I think there’s a focus on looking at the soft skills: the problem solving, the complex reasoning ability, and communications. Because it’s not just developing AI for the sake of software technology; it’s to address that larger business problem. It’s about looking at all of the business functions, and taking all of that into consideration. ... The problem is [that] the gap is getting wider between those employees who understand AI technology and are willing to learn more about it and those who don’t want to have anything to do with it. But I think everybody will be a technologist, eventually. It’s going to be talent augmented by technology. ... “There are so many things, and it’s happening so fast. So, we are still learning as fast as we can. We’re trying to understand what the impact of AI will be, and how it will change our business models. Even from a talent organization like ours, which is providing global talent solutions, what does that do for us? Now, our company is going to start looking for your talent plus the AI agents you’ll need. So AI becomes part of a hiring solution. 


Debunking the AI Hype: Inside Real Hacker Tactics

While headlines are trumpeting AI as the one-size-fits-all new secret weapon for cybercriminals, the statistics—again, so far—are telling a very different story. In fact, after poring over the data, Picus Labs found no meaningful upswing in AI-based tactics in 2024. Yes, adversaries have started incorporating AI for efficiency gains, such as crafting more credible phishing emails or creating/debugging malicious code, but they haven't yet tapped AI's transformational power in the vast majority of their attacks. The data from the Red Report 2025 shows that you can still thwart the majority of attacks by focusing on tried-and-true TTPs. ... Attackers are increasingly targeting password stores, browser-stored credentials, and cached logins, leveraging stolen keys to escalate privileges and spread within networks. This threefold jump underscores the urgent need for ongoing and robust credential management combined with proactive threat detection. Modern infostealer malware orchestrates multi-stage heists blending stealth, automation, and persistence. With legitimate processes cloaking malicious operations and day-to-day network traffic hiding nefarious data uploads, bad actors can exfiltrate data right under your security team's proverbial nose, no Hollywood-style "smash-and-grab" needed. Think of it as the digital equivalent of a perfectly choreographed burglary.

Daily Tech Digest - February 18, 2025


Quote for the day:

"Everything you’ve ever wanted is on the other side of fear." -- George Addair


AI Agents Are About To Blow Up the Business Process Layer

While AI agents are built to do specific tasks or automate specific, often-repetitive tasks (like updating your calendar), they generally require human input. Agentic AI is all about autonomy (think self-driving cars), employing a system of agents to constantly adapt to dynamic environments and independently create, execute and optimize results. When agentic AI is applied to business process workflows, it can replace fragile, static business processes with dynamic, context-aware automation systems. Let’s take a look at why integrating AI agents into enterprise architectures marks a transformative leap in the way organizations approach automation and business processes, and what kind of platform is required to support these systems of automation. ... Models that power networks of agents are essentially stateless functions that take context as an input and output a response, so some kind of framework is necessary to orchestrate them. Part of that orchestration could be simple refinements (for example, having the model request more information). This might sound analogous to retrieval-augmented generation (RAG) — and it should, because RAG is essentially a simplified form of agent architecture: It provides the model with a single tool that accesses additional information, often from a vector database.
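The orchestration loop described above — a stateless model function that takes context as input and either requests a tool or emits a final response — can be sketched minimally. All names here are illustrative assumptions: a real framework would call an LLM where `fake_model` stands in, and the single retrieval tool is what makes this the simplified RAG-like case the excerpt mentions.

```python
def fake_model(context: list) -> dict:
    """Stand-in for a stateless model: a pure function of the context.
    It asks for retrieval once, then produces a final answer."""
    if not any(msg.startswith("TOOL:") for msg in context):
        return {"action": "retrieve", "query": "order status"}
    return {"action": "final", "answer": "Your order shipped yesterday."}

def retrieve(query: str) -> str:
    """Stand-in for a vector-database lookup (the single RAG-style tool)."""
    return f"TOOL: docs matching '{query}'"

def run_agent(user_msg: str, max_steps: int = 5) -> str:
    """The orchestration framework: feed context in, act on the output,
    refine the context, and loop until the model emits a final answer."""
    context = [user_msg]
    for _ in range(max_steps):
        step = fake_model(context)
        if step["action"] == "retrieve":
            context.append(retrieve(step["query"]))  # refinement step
        else:
            return step["answer"]
    raise RuntimeError("agent did not converge within max_steps")

assert run_agent("Where is my order?") == "Your order shipped yesterday."
```

The loop, not the model, holds the state — which is exactly why some framework is needed to orchestrate networks of otherwise stateless model calls.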


The risks of autonomous AI in machine-to-machine interactions

Adversarial AI attacks, such as model poisoning and data manipulation, threaten M2M security by compromising automated authentication and processes. These attacks exploit vulnerabilities in how machine learning models exchange data and authenticate within M2M environments. Model poisoning involves injecting malicious data or manipulating updates, undermining AI decision-making and potentially introducing backdoors. If AI systems accept compromised credentials or updates, security degrades, particularly in autonomous M2M systems, leading to cascading failures. ... The key is implementing zero standing privileges (ZSP) to prevent AI-driven systems from having persistent, unnecessary access to sensitive resources. Instead of long-lived credentials, access is granted just-in-time (JIT) with just-enough privileges, based on real-time verification. ZSP minimizes risk by enforcing ephemeral credentials, policy-based access control, continuous authorization, and automated revocation if anomalies are detected. This ensures that even if an AI system is compromised, attackers can’t exploit standing privileges to move laterally. With AI making autonomous decisions, security must be dynamic. By eliminating unnecessary privileges and enforcing strict, real-time access controls, organizations can secure AI-driven machine-to-machine interactions while maintaining agility and automation.
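The ZSP pattern described above can be sketched concretely: credentials are minted just-in-time, scoped to a single resource, and expire on their own, so every access check re-verifies both scope and lifetime. This is an illustrative toy under stated assumptions — the field names, the 60-second TTL, and the service names are invented, and a real system would have a policy engine verify each request before minting.

```python
import time

def issue_jit_credential(identity: str, resource: str, ttl_s: int = 60) -> dict:
    """Mint an ephemeral, single-resource credential (just-in-time,
    just-enough privilege). Assumes a policy check has already passed."""
    return {"identity": identity,
            "resource": resource,
            "expires_at": time.time() + ttl_s}

def authorize(cred: dict, resource: str) -> bool:
    """Continuous authorization: scope and expiry are checked on every
    call, so a stolen credential cannot be reused laterally or forever."""
    return cred["resource"] == resource and time.time() < cred["expires_at"]

cred = issue_jit_credential("model-service", "feature-store")
assert authorize(cred, "feature-store")          # granted: in scope, in time
assert not authorize(cred, "billing-db")         # denied: no lateral movement

cred["expires_at"] = time.time() - 1             # simulate expiry
assert not authorize(cred, "feature-store")      # denied: ephemeral by design
```

Even if the AI system holding this credential is compromised, the attacker inherits access to one resource for at most one TTL window, which is the containment property ZSP is after.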


Password managers under increasing threat as infostealers triple and adapt

Attacks against credential stories are rising partly because these attacks have become easier and more automated, with widely available tools enabling cybercriminals to extract and exploit credentials at scale. In addition, “many businesses still rely on passwords as their primary defense, despite the known security risks, due to challenges around MFA [multi-factor authentication] adoption and user friction,” Berzinski said. David Sancho, senior threat researcher at anti-malware vendor Trend Micro, told CSO that the increase in malware targeting credential stores is unsurprising. “We are definitely seeing a rise in malware targeting credential stores, but this is hardly a surprise to anybody,” Sancho said. “Credential stores are where credentials are located, specifically on the browser. Every time you let the browser ‘memorize’ a user/password pair, it gets stored somewhere. Those locations are certainly the prime targets — and have been for a long time — for infostealers.” Darren Guccione, CEO and co-founder of password manager vendor Keeper Security, acknowledged that cybercriminals were targeting credential stores but argued that some applications were better protected than others. “Not all password managers are created equal, and that distinction is critical as cybercriminals increasingly target a broad range of cybersecurity solutions, including credential stores,” Guccione said. 


What role does LLM reasoning play for software tasks?

Reasoning models like o1 and R1 work in two steps: first they “reason” or “think” about the user’s prompt, then they return a final result in a second step. In the reasoning step, the model goes through a chain of thought to come to a conclusion. Whether you can fully see the contents of this reasoning step depends on the user interface in front of the model. OpenAI, for example, shows users only summaries of each step. DeepSeek’s platform shows the full reasoning chain (and of course you also have access to the full chain when you run R1 yourself). At the end of the reasoning step, the chatbot UIs show messages like “Thought for 36 seconds” or “Reasoned for 6 seconds”. However long it takes, and regardless of whether the user can see it, tokens are being generated in the background, because LLMs think through token generation. ... Many of the reasoning benchmarks use grade school math problems, so those are my frame of reference when I try to find analogous problems in software where a chain of thought would be helpful. It seems to me that this is about problems that need multiple steps to come to a solution, where each step depends on the output of the previous one. ... Debugging seems like an excellent use case for chain of thought. My main puzzle is how much our use of reasoning for debugging will be hindered by the lack of function calling.
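
As a small illustration, here is a sketch of separating the reasoning tokens from the final answer in raw model output. The `<think>...</think>` delimiters are an assumption based on DeepSeek-R1's output format; other models use different conventions, and hosted UIs often hide the chain entirely:

```python
import re

# Sketch: split an R1-style raw completion into its reasoning chain and
# final answer. The <think> tag convention is an assumption (DeepSeek-R1).
def split_reasoning(raw: str):
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()  # no visible chain (e.g., summarized away)
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

raw_output = (
    "<think>The user asks for 12 * 12. Break it down: 12 * 10 = 120, "
    "12 * 2 = 24, so 120 + 24 = 144.</think>12 * 12 = 144."
)
reasoning, answer = split_reasoning(raw_output)
print(len(reasoning.split()))  # rough proxy for the tokens spent "thinking"
print(answer)
```

Counting the words in the reasoning span makes the point concrete: the "Thought for 36 seconds" message corresponds to real tokens being generated, and billed, in the background.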


How to keep AI hallucinations out of your code

The consequences of flawed AI code can be significant. Security holes and compliance issues are top of mind for many software companies, but some issues are less immediately obvious. Faulty AI-generated code adds to overall technical debt, and it can detract from the efficiency code assistants are intended to boost. “Hallucinated code often leads to inefficient designs or hacks that require rework, increasing long-term maintenance costs,” says Microsoft’s Ramaswamy. Fortunately, the developers we spoke with had plenty of advice about how to ensure AI-generated code is correct and secure. There were two categories of tips: how to minimize the chance of code hallucinations, and how to catch hallucinations after the fact. ... Even with machine assistance, most people we spoke to saw human beings as the last line of defense against AI hallucination. Most saw human involvement remaining crucial to the coding process for the foreseeable future. “Always use AI as a guide, not a source of truth,” says Microsoft’s Ramaswamy. “Treat AI-generated code as a suggestion, not a replacement for human expertise.” That expertise shouldn’t just be around programming generally; you should stay intimately acquainted with the code that powers your applications. “It can sometimes be hard to spot a hallucination if you’re unfamiliar with a codebase,” says Rehl. 
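
One cheap after-the-fact check is verifying that every module an AI-generated snippet imports actually resolves in your environment, since invented package names are a common form of hallucination. A sketch using only the standard library (the "generated" snippet below is invented for illustration):

```python
import ast
from importlib.util import find_spec

# Sketch: flag imports in AI-generated code that do not resolve in the
# current environment -- a cheap first filter for hallucinated packages.
def unresolved_imports(source: str):
    tree = ast.parse(source)
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    return sorted(m for m in modules if find_spec(m) is None)

generated = """
import json
import totally_made_up_helper  # plausible-looking but nonexistent
from os import path
"""
print(unresolved_imports(generated))  # ['totally_made_up_helper']
```

This only catches names that fail to resolve; a hallucinated-but-registered package (typosquatting) would still pass, so it complements rather than replaces human review.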


Open source LLMs hit Europe’s digital sovereignty roadmap

The project’s top-line goal, as per its tagline, is to create: “A series of foundation models for transparent AI in Europe.” Additionally, these models should preserve the “linguistic and cultural diversity” of all EU languages — current and future. What this translates to in terms of deliverables is still being ironed out, but it will likely mean a core multilingual LLM designed for general-purpose tasks where accuracy is paramount. And then also smaller “quantized” versions, perhaps for edge applications where efficiency and speed are more important. “This is something we still have to make a detailed plan about,” Hajič said. “We want to have it as small but as high-quality as possible. We don’t want to release something which is half-baked, because from the European point-of-view this is high-stakes, with lots of money coming from the European Commission — public money.” While the goal is to make the model as proficient as possible in all languages, attaining equality across the board could also be challenging. “That is the goal, but how successful we can be with languages with scarce digital resources is the question,” Hajič said. “But that’s also why we want to have true benchmarks for these languages, and not to be swayed toward benchmarks which are perhaps not representative of the languages and the culture behind them.“


How to Create a Sound Data Governance Strategy

“Governance isn’t a project with an end date. It’s an ongoing hygiene exercise that requires continuous attention and focus,” says Ennamli. “You don’t have to build an army if you did the initial work right, just a diverse team of experts that understand the business dynamics and have foundational data knowledge.” McKesson’s Thirunagalingam warns that it’s also possible to start from the wrong end by ignoring the needs of certain key stakeholders until late in the game. The result is resistance to adopting the solution and governance policies misaligned with the business’s operational requirements. ... “Do a bit and then build up. Make things simple at first [to] quickly deliver business value, such as increasing data accuracy or [enabling] more effective compliance,” says Thirunagalingam. “Promote accountability by embedding governance into business outcomes and encouraging ownership of data stewardship to all employees.” BSI Americas’s Barlow says some organizations don’t understand how much data they possess, which can hamper the implementation of an effective data management program. Similarly, they may not fully grasp what regulations they must comply with or what data is specifically collected. 


Boost Your Website Core Web Vitals Through DevOps Best Practices

Integrating automation and performance testing is essential for making Core Web Vitals SEO a natural part of the DevOps workflow. This includes the implementation of automated performance tests in the CI/CD pipeline after each code change to detect issues early on. CI/CD pipelines enable rapid testing and deployment with performance checks. Load testing enables the replication of high-traffic conditions, uncovering bottlenecks and ensuring the site can scale for spikes. Similarly, performance budgeting, with goals for metrics such as page speed, allows teams to set automated tests and avoid degradation. A/B testing enables teams to test new features side-by-side, seeing how they affect Core Web Vitals before deployment. With these automated flows, teams reliably deliver quality code, ensuring performance is always a consideration and never an afterthought. ... Collaboration among DevOps, developers and SEO experts is required to optimize Core Web Vitals. All have their own set of skills, and if all of them collaborate, they can make a decent plan: DevOps and Developers: Developers construct the site, and DevOps ensures its proper deployment. Communicating frequently is the secret to catching performance problems and making sure new code doesn’t slow down the site. 
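
As an illustration, a performance budget gate can be as simple as comparing measured metrics against thresholds and failing the build on any violation. The budget below uses Google's published "good" Core Web Vitals targets; the measured values are invented, and in a real pipeline they would come from a lab tool such as Lighthouse:

```python
# Sketch of a CI performance-budget gate. Measured values are hardcoded
# here; in CI they would come from an automated lab run (assumption).
BUDGET = {  # Google's "good" Core Web Vitals thresholds
    "lcp_ms": 2500,   # Largest Contentful Paint
    "inp_ms": 200,    # Interaction to Next Paint
    "cls": 0.1,       # Cumulative Layout Shift
}

def check_budget(measured: dict) -> list:
    """Return a list of human-readable budget violations."""
    return [
        f"{metric}: {measured[metric]} exceeds budget {limit}"
        for metric, limit in BUDGET.items()
        if measured.get(metric, 0) > limit
    ]

measured = {"lcp_ms": 3100, "inp_ms": 180, "cls": 0.05}
violations = check_budget(measured)
print(violations)  # ['lcp_ms: 3100 exceeds budget 2500']
# In a real pipeline, exit nonzero on violations so the deploy is blocked:
# sys.exit(1) if violations else None
```

Running this on every code change is what turns "performance is a consideration" from a slogan into an enforced gate.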


Mastering Kubernetes in the Cloud: A Guide to Cloud Controller Manager

The main benefit of Cloud Controller Manager is that it offers a simple way for Kubernetes to interact with cloud provider APIs without requiring any special configuration or code implementation on the part of Kubernetes users. Cluster admins can simply choose which cloud they need to integrate with, then enable the appropriate Cloud Controller Manager. In addition, from the perspective of the Kubernetes project, Cloud Controller Manager is advantageous because it separates cloud-specific compatibility logic into a distinct component. Rather than building support for each cloud platform's APIs directly into the Kubernetes control plane, Cloud Controller Manager uses a plugin architecture that allows the various cloud providers to write the logic necessary for Kubernetes to integrate with their APIs, then make it available to Kubernetes users as a component that the users can optionally enable. This approach makes it easy for cloud providers to update the compatibility layer as needed in order to keep it in sync with their APIs. ... If you're running Kubernetes on bare-metal servers that you are managing yourself, Cloud Controller Manager is not necessary because Kubernetes can interact with nodes and other resources directly, without having to use special APIs.


A cohesive & data-centric culture is essential for businesses to thrive in the AI-driven world

A cohesive, data-centric culture has become essential for businesses to thrive in an AI-dominated world, because it enables smarter, faster decisions. When accurate, accessible, and well-managed data flows across the organisation, decisions can move away from guesswork and rest on reliable information. A data-driven culture also encourages a more strategic approach to business challenges. AI-powered solutions take this further by providing real-time insights, predictive analytics, and automation: companies can rapidly analyse massive volumes of data, reveal hidden patterns, and predict trends, acting proactively rather than reactively. For instance, studies have found that AI can improve forecast accuracy in the retail sector by reducing errors by up to 50%. Other estimates suggest artificial intelligence could lift the financial sector by 38% within 10 years, and reports predict AI could help the healthcare sector save $150 billion annually through greater efficiency and better decision-making. These examples illustrate the advanced data culture AI enables, helping businesses act proactively and base decisions on facts.

Daily Tech Digest - February 17, 2025


Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis


Like it or not, AI is learning how to influence you

We need to consider the psychological impact that will occur when we humans start to believe that the AI agents giving us advice are smarter than us on nearly every front. When AI achieves a perceived state of “cognitive supremacy” with respect to the average person, it will likely cause us to blindly accept its guidance rather than using our own critical thinking. This deference to a perceived superior intelligence (whether truly superior or not) will make agent manipulation that much easier to deploy. I am not a fan of overly aggressive regulation, but we need smart, narrow restrictions on AI to avoid superhuman manipulation by conversational agents. Without protections, these agents will convince us to buy things we don’t need, believe things that are untrue and accept things that are not in our best interest. It’s easy to tell yourself you won’t be susceptible, but with AI optimizing every word they say to us, it is likely we will all be outmatched. One solution is to ban AI agents from establishing feedback loops in which they optimize their persuasiveness by analyzing our reactions and repeatedly adjusting their tactics. In addition, AI agents should be required to inform you of their objectives. If their goal is to convince you to buy a car, vote for a politician or pressure your family doctor for a new medication, those objectives should be stated up front.


Leveraging AI for Business Continuity and Disaster Recovery in the Work-From-Home Era

AI-driven tools can monitor the health and performance of hardware and predict hardware failure before it happens using anomaly detection algorithms. For example, if a hard drive is starting to fail or there’s unusual network activity, AI systems can flag the activity/potential problem early and send an email to alert the WFH user or corporate IT staff, allowing businesses to take preventative action. ... AI can detect anomalies in network traffic or access patterns which may indicate a cyberattack (e.g., ransomware, phishing, or data breach). AI-powered cybersecurity tools, such as intrusion detection systems (IDS) and endpoint protection software, can respond automatically to threats by isolating affected systems or rolling back malicious changes. ... Small businesses may not have reliable or frequent data backups or rely on manual processes (e.g., external hard drives) that aren’t automated or secure. It may be difficult to recover without a proper backup strategy if critical data is lost due to hardware failure, cyber-attacks, or natural disasters. ... AI-assisted BC and DR solutions offer a range of benefits, particularly for SOHO and WFH users. These offerings are becoming essential as businesses of all sizes seek to maintain operational resilience in an ever-changing technological landscape. 
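
A sketch of the predictive idea: fit a trend line to a SMART-style health counter and alert when it climbs faster than some threshold. The metric, samples, and threshold below are all hypothetical simplifications of what a real anomaly detection system would use:

```python
# Sketch: flag a drive for preventative action when a SMART counter (here a
# hypothetical reallocated-sector count) shows a sustained upward trend.
def trend_slope(samples):
    """Least-squares slope of evenly spaced samples (units per sample)."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def should_alert(samples, slope_threshold=0.5):
    # Alert if the counter grows faster than the threshold per sample.
    return trend_slope(samples) > slope_threshold

healthy = [0, 0, 1, 0, 1, 1, 0, 1]    # noise around zero
failing = [0, 1, 3, 4, 7, 9, 12, 15]  # steadily climbing
print(should_alert(healthy))  # False
print(should_alert(failing))  # True: email the WFH user or IT staff
```

The point is that the alert fires on the trend, before outright failure, which is what makes preventative action possible.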


GenAI can make us dumber — even while boosting efficiency

“A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the study found. Overall, workers’ confidence in genAI’s abilities correlates with less effort in critical thinking. The focus of critical thinking shifts from gathering information to verifying it, from problem-solving to integrating AI responses, and from executing tasks to overseeing them. The study suggests that genAI tools should be designed to better support critical thinking by addressing workers’ awareness, motivation, and ability barriers. ... As Agentic AI becomes common, people may come to rely on it for problem-solving — but how will we know it’s doing things correctly, Gold said. People might accept its results without questioning, potentially limiting their own skills development by allowing technology to handle tasks. Lev Tankelevitch, a senior researcher with Microsoft Research, said not all genAI use is bad. He said there’s clear evidence in education that it can enhance critical thinking and learning outcomes. 


How to harness APIs and AI for intelligent automation

APIs are the steady bridges connecting diverse systems and data sources. This reliable technology, which emerged in the 1960s and matured during the noughties ecommerce boom, is bridging today’s next-gen technologies. APIs allow data transfer to be automated, which is essential for training AI models efficiently. Rather than building complex integrations from scratch, they standardize data flow to ensure the data that feeds AI models is accurate and reliable. ... Data preprocessing is the critical step before training any AI model. APIs can ensure that AI applications and models only receive preprocessed data. This minimizes manual errors, which smooths the AI training pipeline. With a direct interface to standardized data, developers can focus on refining the model architecture rather than spending excessive time on data cleanup. Real-time evaluation keeps AI models in check in dynamic environments. By feeding real-time performance data back into the system, developers can quickly adjust parameters to improve the model. ... As your data volumes and transaction rates increase, your APIs must scale accordingly. Performance issues like latency or downtime can disrupt AI training and real-time processing. To be responsive under heavy loads, design APIs with load balancing, caching, and built-in redundancy to maintain consistent performance during peak use. 
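
A sketch of that preprocessing idea: a normalization step sitting behind the data API so training jobs never see raw, inconsistent records. The field names and cleaning rules are illustrative, not from any particular system:

```python
# Sketch: a preprocessing layer behind a data API, so AI training jobs
# only ever receive cleaned, normalized records. Fields are hypothetical.
def preprocess(records):
    cleaned = []
    for rec in records:
        # Drop records missing required fields rather than guessing values.
        if rec.get("user_id") is None or rec.get("amount") is None:
            continue
        cleaned.append({
            "user_id": str(rec["user_id"]).strip(),
            "amount": round(float(rec["amount"]), 2),  # normalize currency
            "country": str(rec.get("country", "unknown")).upper(),
        })
    return cleaned

raw = [
    {"user_id": " 42 ", "amount": "19.999", "country": "de"},
    {"user_id": None, "amount": "5.00"},       # rejected: no user_id
    {"user_id": 7, "amount": 3, "country": "US"},
]
print(preprocess(raw))
```

Serving only the output of such a function through the API is what removes manual cleanup from the training pipeline.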


Applying Behavioral Economics to Phishing and Social Engineering Attacks

It’s all about deeply and thoroughly understanding human behavior and how these behaviors are impacted by influences that use cognitive biases, emotions, social influences, and contextual factors to drive decisions. Bad actors in the world of cybersecurity also prey upon these human tendencies to drive actions that put organizations at risk. ... Humans are social creatures that trust those they believe are authorities. They’re driven by fear, greed, and curiosity that can cloud their judgement. And they’re prone to cognitive shortcuts—biases that often drive behaviors. Understanding the power of these drivers can help organizations put strategies into place to thwart them. ... Here are some important steps that can help employees make better decisions: Training employees about the threat of cyberattacks, the form these attacks generally take, and their role in helping to avert them is an important first step. Training should be ongoing, not a single instance or once-a-year event. Phishing simulations have proven to be a very effective way to tangibly reduce security breakdowns. These simulations serve to test employee awareness and identify areas of opportunity for improvement. Strong authentication measures can help keep accounts secure by requiring two or more methods of identification and verification—multi-factor authentication—before allowing access to information or systems.


Why Digital Projects Need Transparency and Accountability

As a CIO, it is easy to underestimate the time it will take to build forward. In the public sector, this takes longer due to inherent risk aversion. In my first few months at DWP, I felt I was making a difference, but after the first few months, the size of the prize began to take its toll and the risk factors of going forward began to set in. As CIOs, it is our role to persuade, influence and keep in mind where we are trying to get to. We landed that vision with the senior team but DWP's size and geographic spread made it harder to get the spokes of the business to hear the same story and grasp the same benefits. If I had my time again, I would spend more time with the business, less at the center and try to build momentum that was unstoppable. As I completed my first 100 days in the CIO role at Segro, one of the key takeaways from DWP was making sure the digital leadership team knew how to act together. In my new role, I am able to replicate that at a faster pace. Brand identity matters. At Segro, we are not known as the digital team, and I am striving to change that. The organization will benefit from unifying its understanding of technology, transformation and data. 


Navigating Europe’s AI Code of Practice Before the Clock Runs Out

The Code of Practice for general-purpose AI demonstrates a sincere effort to get the details right. Yet, in a rush to cover every contingency, it risks overlooking the bigger picture: spurring the next generation of AI-driven breakthroughs that can speed up drug discovery, modernize public services, and let small farmers use new predictive tools for planting and harvesting. Innovation is a delicate process, especially in emerging areas like large-scale language models or real-time climate analytics. Europe possesses the scientific expertise and market size to shape a future where these tools become transformative assets in every corner of the continent. But that future hinges on how carefully policymakers, industry players, and civil society calibrate the rules. ... Europe’s AI revolution will not happen on autopilot. Real progress demands revamping processes, investing in talent, and scaling up what works. The public sector must also move faster if Europe is to modernize healthcare, education, and core government services. Tangled or rigid rules risk derailing Europe’s ambitions. Europe’s digital regulations already weigh heavily on businesses. Over the past 25 years, the number of economy-wide laws doubled, and the EU has rolled out close to 100 tech-focused laws. High-minded ideals often mix with fragmented enforcement and overlapping rules.


Seven Common Reasons Why Data Science Projects Fail

Large organizations may own hundreds of data assets spread across sprawling, multi-faceted IT infrastructures. Unless they have a detailed, continuously updated data catalog in place that tracks all of those assets – which many don’t – simply finding the data that the team needs to complete a project can present a major challenge. Here again, however, tools and techniques are available that can help. The major solution is data discovery software, which can automatically identify data resources, including those that are not documented. ... Too often, businesses decide that they want to do something with their data, but they don’t know exactly what. For example, they might establish a high-level goal like using data-derived insights to grow revenue, without determining exactly which types of revenue-related challenges they want to solve with help from data. Avoiding this pitfall is simple: You need to articulate precise deliverables and outcomes at the start of your project. There’s always room to adjust the details a bit once a project is underway, but you should know from the beginning what the overarching outcomes of the project should be. ... A final key challenge that can thwart data science project success is the failure to understand what the goals of data science are, and which methodologies and resources data science requires.


What’s changing the rules of enterprise AI adoption for IT leaders

As model costs fall and the value from AI migrates up to the application layer, enterprises are going to have even greater choice in business solutions, either from third parties or those developed inhouse. For CIOs with access to the right resources, building applications internally is now a more realistic proposition. This becomes increasingly attractive in the context of complex business processes that may be unique to enterprises. As the costs of running models fall to near zero, the ROI equation shifts dramatically. According to Forrester Research, the ability to run hyper-efficient models like DeepSeek locally on PCs opens up a new era of edge intelligence, which businesses can deploy across organizations. “The real value in AI isn’t just in building bigger models, but innovating on top of them and in implementing them efficiently,” says Devesh Mishra, president of CoreAI at digital transformation specialists Keystone. “Companies that pair foundation model advancements with deep business and operational expertise will lead the next phase of AI-driven ROI.” This deep understanding of industry verticals and their specific issues and needs will define success for many vendors as they increasingly compete with inhouse development teams. 


Rowing in the Same Direction: 6 Tips for Stronger IT and Security Collaboration

Due to market dominance, many software vendors focus on Windows, but IT fleets today include a mix of Chromebooks, Linux systems and Apple devices. Security and IT teams must recognize that the weakest endpoint determines the overall defense posture. By ensuring IT and security teams are aligned on what’s in the environment, you can break down silos and work together toward shared security goals, such as zero-trust implementation. ... Security and IT teams should collaborate to ensure policies protect the overall business mission, not just the bottom line. For example, if security requires an agent to collect telemetry for advanced analysis (e.g., CrowdStrike, Halcyon, etc.), what’s the performance impact on endpoints? If the agent is running AI/ML workloads, how is it optimized for performance on XPU and non-XPU systems? IT fleet leaders care about security, but they also demand top performance and battery life from devices. Together, security and IT teams can align on solutions that offer best-in-class security without degrading fleet performance. ... Ownership in IT and security is one of the hardest challenges to solve. In many cases, responsibility over cloud workloads, applications and ephemeral systems isn’t always clearly defined. 


Daily Tech Digest - February 16, 2025


Quote for the day:

"Leaders should influence others in such a way that it builds people up, encourages and edifies them so they can duplicate this attitude in others." -- Bob Goshen


A look under the hood of transformers, the engine driving AI model evolution

Depending on the application, a transformer model follows an encoder-decoder architecture. The encoder component learns a vector representation of data that can then be used for downstream tasks like classification and sentiment analysis. The decoder component takes a vector or latent representation of the text or image and uses it to generate new text, making it useful for tasks like sentence completion and summarization. For this reason, many familiar state-of-the-art models, such as the GPT family, are decoder-only. Encoder-decoder models combine both components, making them useful for translation and other sequence-to-sequence tasks. For both encoder and decoder architectures, the core component is the attention layer, as this is what allows a model to retain context from words that appear much earlier in the text. ... Currently, transformers are the dominant architecture for many use cases that require LLMs and benefit from the most research and development. Although this does not seem likely to change anytime soon, one different class of model that has gained interest recently is state-space models (SSMs) such as Mamba. This highly efficient algorithm can handle very long sequences of data, whereas transformers are limited by a context window.
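
A minimal, pure-Python sketch of the scaled dot-product attention at the core of those layers. Real implementations are batched tensor operations with learned query/key/value projections, all omitted here for readability:

```python
import math

# Minimal single-head scaled dot-product attention, written in plain
# Python to show the mechanism by which earlier context is retained.
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Each argument is a list of equal-dimension vectors (lists of floats)."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)  # how much each position contributes
        # Weighted mix of value vectors: information from any earlier
        # position can flow into the output, however far back it appears.
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))  # leans toward v[0], since q matches k[0]
```

The per-query loop over every key is also why plain attention scales quadratically with sequence length, which is the limitation SSMs like Mamba aim to sidestep.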


McKinsey On Return To Office: Leaders Are Focused On The Wrong Thing

Unsurprisingly, older employees report higher satisfaction with on-site work than their younger colleagues. Nevertheless, employees across all work models report similar satisfaction levels, which debunks the belief that bringing people back in person automatically enhances engagement or retention. Worse still, leaders consistently overestimate their organizations’ maturity regarding the very factors used to justify returning to the office. ... The balance of power may have shifted back to bosses, but, as Voltaire said first and Spider-Man famously learns from Uncle Ben, “with great power comes great responsibility.” No matter what workplace model a given employee finds themselves in today, the past few years likely opened their eyes to the power of choice and flexibility and the chasm between modern hospitality and retail-oriented experiences and the vibrancy and community in a traditional office. ... So employees believe they are doing the work, and they may accept that flexibility is a reward for objectively high performance. If executives believe the purpose of the office is to accelerate innovation, connectivity, and mentoring, they are on the hook to ensure it does. Leaders must model new behaviors, invest in workplace experience, and learn to measure outcomes without a bias for presence. Employees may quit as soon as the power pendulum swings back.


8 tips for being a more decisive leader

“Clarity is what is expected from a leader,” says Malhotra. “Clarity of vision, clarity in strategy, clarity of plan, clarity in the process, and clarity in how to measure success.” Showing up with an answer is not as important to the decision as bringing clarity to the process. “As a leader, you’re the force multiplier for your organization,” he says. “Force multiplying is a vector quantity, not a scalar quantity. It’s a vector quantity because the direction is very important. It’s not just the magnitude. It’s the direction, too. So being a force multiplier requires that you are clear when it comes to the end state you are trying to achieve.” ... “There are two things you have to consider: the urgency and the importance of the decision,” says Efrain Ruh, field CTO for Continental Europe at Digitate. If something is complex and important, take your time and gather as much information as possible. But if it is a decision that is easy to come back from, he says, “I try not to go too deep.” “There are ‘single-door decisions’ and ‘double-door decisions,’” agrees Malhotra. When it’s a single-door decision, you can never come back through that door after you have walked through it. ... When you step into a leadership role, you begin to see everything from a high-level strategy point of view. But your decisions will often affect people with their boots on the ground.


Can English Dethrone Python as Top Programming Language?

IDC predicts that by 2028, natural language will become the most widely used programming language, with developers using it to create 70% of net-new digital solutions. (Source: IDC FutureScape: Worldwide Developer and DevOps 2025 Predictions) “I actually think that the best phrasing of this prediction would be to replace ‘natural language’ with ‘English’ because of the dominance of English as a spoken and written language worldwide,” Dayaratna said. Moreover, he said he believes that in four to five years, developers will increasingly go to a chatbot-like interface and use natural language to produce digital solutions. Meanwhile, code will be used to innovate on the technology substrate that enables this kind of technology. “In other words, we’re not far from a world that witnesses the demise of commercial off-the-shelf software simply because it will be so easy to create such software, in a custom way, for an organization’s business processes,” Dayaratna said. Hence, he explained that we are seeing the emergence of what Amjad Masad, CEO of Replit, called the era of “personal software.” “Just as the Mac inaugurated personal computing in 1984, generative AI has initiated the era of ‘personal software’ that recognizes the specificity of individual and organizational preferences,” Dayaratna said.


What is anomaly detection? Behavior-based analysis for cyber threats

“Anomaly detection is the holy grail of cyber detection where, if you do it right, you don’t need to know a priori the bad thing that you’re looking for,” Bruce Potter, CEO and founder of Turngate, tells CSO. “It’ll just show up because it doesn’t look like anything else or doesn’t look like it’s supposed to. People have been tilting at that windmill for a long time, since the 1980s, trying to figure out what normal is so they can look for deviations from it to find all the bad things happening in their enterprises.” ... Although predicated on advanced math concepts, anomaly detection, or as the NIST Cybersecurity Framework 2.0 calls it, “adverse event analysis,” has over the past two decades been incorporated into a wide range of cybersecurity tools, including endpoint detection and response (EDR), firewall, and security information and event management (SIEM) tools. “In general, you can split the detection universe into two halves,” Potter says. “One is finding known bads, and then one is finding things that might be bad. Known bads are typically like a signature base where I know very specifically if I see this file or this exact thing happened on the system, it’s bad.” Known bads are typically flagged by fundamental cybersecurity tools.
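
A toy sketch of the baseline idea: learn what "normal" looks like from history, then flag large deviations without needing any prior signature of the bad thing. The z-score threshold and the login counts below are invented for illustration:

```python
import statistics

# Sketch of baseline-based anomaly detection: model "normal" from history,
# flag observations that deviate too far -- no known-bad signature needed.
def is_anomalous(history, observation, z_threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return observation != mean  # flat baseline: any change is novel
    return abs(observation - mean) / stdev > z_threshold

# Hypothetical daily counts of failed logins for one account.
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
print(is_anomalous(baseline, 5))   # False: within normal variation
print(is_anomalous(baseline, 60))  # True: large deviation, worth review
```

Production tools replace the single z-score with multivariate and learned models, but the split Potter describes holds: this finds "might be bad" deviations, while signature matching finds known bads.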


Open Source AI Models: Perfect Storm for Malicious Code, Vulnerabilities

Executable data files are not the only threats, however. Licensing is another issue: While pretrained AI models are frequently called "open source AI," they generally do not provide all the information needed to reproduce the AI model, such as code and training data. Instead, they provide the weights generated by the training and are covered by licenses that are not always open source compatible. Creating commercial products or services from such models can potentially result in violating the licenses, says Andrew Stiefel, a senior product manager at Endor Labs. "There's a lot of complexity in the licenses for models," he says. "You have the actual model binary itself, the weights, the training data, all of those could have different licenses, and you need to understand what that means for your business." Model alignment — how well its output aligns with the developers' and users' values — is the final wildcard. DeepSeek, for example, allows users to create malware and viruses, researchers found. Other models — such as OpenAI's o3-mini model, which boasts more stringent alignment — have already been jailbroken by researchers. These problems are unique to AI systems, and the boundaries of how to test for such weaknesses remain a fertile field for researchers, says ReversingLabs' Pericin.
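
The "executable data file" risk is concrete with Python pickle, the serialization format embedded in many model checkpoints: certain opcodes can trigger code execution at load time. A simplified sketch of scanning for those opcodes using only the standard library (real model scanners are far more thorough than this opcode allowlist):

```python
import pickle
import pickletools

# Opcodes that can cause object construction / code execution on load.
# This set is a simplification for illustration, not a complete scanner.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes):
    """Return the set of suspicious opcode names found in a pickle stream."""
    found = set()
    for opcode, _arg, _pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS:
            found.add(opcode.name)
    return found

class Payload:
    # Mimics a malicious checkpoint: __reduce__ runs a callable on unpickling.
    def __reduce__(self):
        return (print, ("this could have been os.system",))

benign = pickle.dumps({"weights": [0.1, 0.2]})   # plain data: clean
malicious = pickle.dumps(Payload())              # carries a callable
print(scan_pickle(benign))     # set()
print(scan_pickle(malicious))  # includes 'REDUCE'
```

Scanning with `pickletools.genops` never deserializes the data, so it is safe to run on untrusted files; formats like safetensors avoid the problem entirely by storing only raw tensor data.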


Risk Matters: Cyber Risk and AI – The Changing Landscape

Although AI helps organizations defend against cyber-attacks, it is a double-edged sword. More to the point, AI is also providing cyber attackers with an array of cost-efficient techniques that facilitate their cyber-attacks. Sophisticated AI-generated phishing attacks, social engineering attacks, and ransomware attacks are just a few of the ways AI has made the cyber-attack landscape more lethal. The AI models used by cyber attackers and cyber defenders have been evolving at a rapid pace. As a result, the strategic interactions between cyber attackers and cyber defenders have become more automated, more dynamic, more adaptive, and more complex. These developments have increased, and substantially changed, the game-theoretic aspects associated with cyber risk. ... Besides considering the total amount to spend on cybersecurity-related activities, a subsidiary question for organizations to answer is: How much of our organization’s cybersecurity-related budget should be devoted to developing and implementing AI models designed to reduce the likelihood of a cyber incident? In answering this subsidiary question, organizations need to consider the costs associated with the AI models.
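One simple way to frame the budget question above is an expected-value comparison: does the loss avoided by an AI-based detection program exceed its cost? The sketch below uses entirely hypothetical figures; real analyses (e.g., Gordon-Loeb-style models) are considerably more involved.

```python
def net_benefit(breach_prob, loss, prob_reduction, ai_cost):
    """Compare the expected loss avoided by an AI detection program
    against what the program costs. Positive means it pays for itself."""
    expected_loss_before = breach_prob * loss
    expected_loss_after = (breach_prob - prob_reduction) * loss
    return (expected_loss_before - expected_loss_after) - ai_cost

# Hypothetical figures: a 10% annual breach probability, a $5M expected
# loss per breach, and a $150k AI program that cuts the probability by
# 4 percentage points.
print(net_benefit(0.10, 5_000_000, 0.04, 150_000))  # ≈ 50000.0
```

The same arithmetic also shows the flip side: halve the probability reduction in this example and the program's cost exceeds the loss it avoids, which is exactly the trade-off organizations must weigh.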


Juniper CEO: ‘I am disappointed and somewhat puzzled’ by DOJ merger rejection

“They’re taking such a narrow view of the total transaction, which is the wireless line segment, a relatively small part of Juniper’s business, a small part of HPE’s business. And even if you do take a look at the wireless segment, you know we’re talking about a very competitive area with eight or nine different competitors. It’s unfortunate that we’re in the situation that we’re in, but that said, that’s okay. We’re prepared to take it to court and to prove our case and ultimately, hopefully, prevail,” Rahim said. HPE and Juniper met with the DOJ several times to go over the purchase, but the companies had no indication the DOJ would go in the direction it did, certainly with regard to its focus on the wireless market, Rahim said. The DOJ issued a Complaint “that ignores the reality that HPE and Juniper are two of at least ten competitors with comparable offerings and capabilities fighting to win customers every day,” the companies wrote. “A Complaint whose description of competitive dynamics in the wireless local area networking (WLAN) space is divorced from reality; and a Complaint that contradicts the conclusions reached by antitrust regulators around the world that have unconditionally cleared the transaction.”


The Benefits of the M&A Frenzy in Fraud Solutions

With businesses looking to reduce the number of vendors they work with to lower integration costs, David Mattei, strategic advisor at Datos Insights, expects "a higher momentum of M&A activities in 2025 as vendors race to grow." "Single-solution vendors have a harder time competing in today's world," and small to medium-sized single-solution vendors "are likely to be acquired," Mattei said. LexisNexis' acquisition of IDVerse in December 2024 is an example of this trend. ... Fraud executives agree that the most pragmatic approach today is proactive communication and awareness campaigns, and the data supports their effectiveness. However, the most anticipated and potentially effective solution is consortia-based fraud detection, combining risk signals from both sending and receiving financial institutions, Fooshee told Information Security Media Group. The challenge lies in overcoming resistance to information sharing - from fraud teams, compliance, legal and regulators - because of concerns over data integrity, integration complexities and privacy restrictions. Interestingly, markets most affected by scams and with simpler regulatory landscapes are finding ways to navigate these barriers more effectively.


Apple’s emotional lamp and the future of robots

It’s clear that Apple’s lamp is programmed to move in a way that deludes users into believing that it has internal states it doesn’t actually have. ... Apple’s lamp research definitely sheds light on where our interaction with robots may be heading—a new category of appliance that might well be called the “emotional robot.” A key component of the research was a user study comparing how people perceived a robot using functional and expressive movements versus one that uses only functional movements. ... The biggest takeaway from Apple’s ELEGNT research is likely that neither a human-like voice nor a human-like body, head, or face is required for a robot to successfully trick a human into relating to it as a sentient being with internal thoughts, feelings, and emotions. ELEGNT is not a prototype product; it is instead a lab and social experiment. But that doesn’t mean a product based on this research will not soon be available on a desktop near you. ... Apple is developing a desktop robot project, codenamed J595, and is targeting a launch within two years. According to reports based on leaks, the robot might look a little like Apple’s iMac G4, which had a lamp-like form factor featuring a screen at the end of a movable “arm.”