
Daily Tech Digest - January 26, 2026


Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki



Stop Choosing Between Speed and Stability: The Art of Architectural Diplomacy

In contemporary business environments, Enterprise Architecture (EA) is frequently misunderstood as a static framework—merely a collection of diagrams stored digitally. In fact, EA functions as an evolving discipline focused on effective conflict management. It serves as the vital link between the immediate demands of the present and the long-term, sustainable objectives of the organization. To address these challenges, experienced architects employ a dual-framework approach, incorporating both W.A.R. and P.E.A.C.E. methodologies. At any given moment, an organization is a house divided. On one side, you have the product owners, sales teams, and innovators who are in a state of perpetual W.A.R. (Workarounds, Agility, Reactivity). They are facing the external pressures of a volatile market, where speed is the only currency and being "first" often trumps being "perfect." To them, architecture can feel like a roadblock—a series of bureaucratic "No’s" that stifle the ability to pivot. On the other side, you have the operations, security, and finance teams who crave P.E.A.C.E. (Principles, Efficiency, Alignment, Consistency, Evolution). They see the long-term devastation caused by unchecked "cowboy coding" and fragmented systems. They know that without a foundation of structural integrity, the enterprise will eventually collapse under the weight of its own complexity, turning a fast-moving startup into a sluggish, expensive legacy giant.


Why Identity Will Become the Ultimate Control Point for an Autonomous World in 2026

The law of unintended consequences will dominate organisational cybersecurity in 2026. As enterprises increase their reliance on autonomous AI agents with minimal human oversight, and as machine identities multiply, accountability will blur. The constant tension between efficiency and security will fuel uncontrolled privilege sprawl, forcing organisations to innovate not only in technology, but in governance. ... Attackers will exploit this shift, embedding malicious prompts and compromising automated pipelines to trigger actions that bypass traditional controls. Conventional privileged access management and identity and access management will no longer be sufficient. Continuous monitoring, adaptive risk frameworks, and real-time credential revocation will become essential to manage the full lifecycle of AI agents. At the same time, innovation in governance and regulation will be critical to prevent a future defined by “runaway” automation. Two years after NIST released its first AI Risk Management Framework, the framework remains voluntary globally, and adoption has been inconsistent since no jurisdiction mandates it. Unless governance becomes a requirement, not just a guideline, organisations will continue to treat it as a cost rather than a safeguard. Regulatory frameworks that once focused on data privacy will expand to cover AI identity governance and cyber resilience, mandating cross-region redundancy and responsible agent oversight.
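The "real-time credential revocation" the author calls for can be made concrete by issuing agents short-lived credentials that are checked against a revocation list on every use. The sketch below is illustrative only: the `CredentialBroker` class and its API are invented for this example, not taken from any product.

```python
import time
import secrets

class CredentialBroker:
    """Issues short-lived agent credentials and supports instant revocation."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._issued = {}    # token -> (agent_id, expiry timestamp)
        self._revoked = set()

    def issue(self, agent_id):
        token = secrets.token_hex(16)
        self._issued[token] = (agent_id, time.time() + self.ttl)
        return token

    def revoke(self, token):
        # Real-time revocation: takes effect on the very next validation.
        self._revoked.add(token)

    def validate(self, token):
        if token in self._revoked or token not in self._issued:
            return False
        _, expiry = self._issued[token]
        return time.time() < expiry

broker = CredentialBroker(ttl_seconds=300)
tok = broker.issue("agent-42")
assert broker.validate(tok)      # fresh token is valid
broker.revoke(tok)
assert not broker.validate(tok)  # revoked immediately, well before TTL expiry
```

Even without revocation, the TTL bounds how long a leaked agent credential stays usable, which is the point of managing the "full lifecycle" of agent identities.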


The human paradox at the center of modern cyber resilience

The problem for security leaders is that social engineering is still the most effective way to bypass otherwise robust technical controls. The problem is becoming more acute as threat actors increasingly use AI to deliver compelling, personalized, and scalable phishing attacks. While many such incidents never reach public attention, an attempt last year to defraud WPP used AI-generated video and voice cloning to impersonate senior executives in a highly convincing deepfake meeting. Unfortunately, the risks don’t end there. Even with strong technical controls and a workforce alert to social engineering tactics, risk also comes from employees who introduce tools, devices or processes that fall outside formal IT governance. ... What’s needed instead is a shift in both mindset and culture, where employees understand not just what not to do, but why their day-to-day decisions (which tools they trust, how they handle unexpected requests, and when they choose to slow down and double-check something rather than act on instinct) genuinely matter. From a leadership perspective, it’s much better to foster a culture in which people feel comfortable reporting suspicious activity without fear of blame, rather than an environment where taking the risk feels like the easier option. ... Instead of acting quickly to avoid delaying work, the employee pauses because the culture has normalized slowing down when something seems unusual. They also know exactly how to report or verify because the processes are familiar and straightforward, with no confusion about who to contact or whether they’ll be blamed for raising a false alarm.


Is cloud backup repatriation right for your organization?

Cost is, without a doubt, one of the major reasons for repatriation. Cloud providers have touted the affordability of the cloud over physical data storage, but getting the most bang for your buck from using the cloud requires due diligence to keep costs down. Even major corporations struggle with this issue. The bigger the environment, the more complex it is to accurately model and cost, particularly with multi-cloud environments. And as we know, cloud is incredibly easy to scale up. Keeping with our data theme, it is important to understand the costing model of data backup: bringing back data from deep storage is extremely expensive when done in bulk. Software must be expertly tuned to use the provider storage tier stack efficiently, or massive costs can be incurred. On-premises, the storage costs are already sunk. The data is also local (assuming local backup with remote replication for offsite backup), so restoring data and services happens more quickly. ... Straight-up backup to the cloud can be cheaper and more effective than on-site backups. It also passes a good portion of the management overhead to the cloud provider, such as hardware support, general maintenance and backup security. As we discussed, however, putting backups in another provider's hands might mean longer response and recovery times. Smaller businesses often have an immature environment and cloud backup can be a boon, but larger businesses might consider repatriation if the infrastructure for on-site is available.
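The deep-storage cost trap described above is easy to rough out with arithmetic. The sketch below uses placeholder per-GB rates: the 0.02 retrieval and 0.09 egress figures are illustrative assumptions, not any provider's actual pricing.

```python
def bulk_restore_cost(data_gb, retrieval_per_gb, egress_per_gb):
    """Cost of pulling backup data out of a cloud deep-storage tier.

    Rates are placeholders -- real figures vary by provider, tier,
    and region, which is exactly why careful cost modelling matters.
    """
    return data_gb * (retrieval_per_gb + egress_per_gb)

# A 50 TB bulk restore from a deep-archive tier at illustrative rates:
data_gb = 50_000
cloud = bulk_restore_cost(data_gb, retrieval_per_gb=0.02, egress_per_gb=0.09)
print(f"Bulk cloud restore: ${cloud:,.0f}")  # prints "Bulk cloud restore: $5,500"

# On-premises, the storage cost is already sunk: the marginal cost of a
# restore is near zero, and local restores avoid the transfer time entirely.
```

The per-GB fee looks trivial until it is multiplied by a disaster-scale restore, which is why repatriation analyses hinge on restore volume, not steady-state storage price.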


Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents

AI agents are different. They operate with delegated authority and can act on behalf of multiple users or teams without requiring ongoing human involvement. Once authorized, they are autonomous, persistent, and often act across systems, moving between applications and data sources to complete tasks end-to-end. In this model, delegated access doesn’t just automate user actions, it expands them. Human users are constrained by the permissions they are explicitly granted, but AI agents are often given broader, more powerful access to operate effectively. As a result, the agent can perform actions that the user themselves was never authorized to take. ... It’s no wonder existing IAM assumptions break down. IAM assumes a clear identity, a defined owner, static roles, and periodic reviews that map to human behavior. AI agents don’t follow those patterns. They don’t fit neatly into user or service account categories, they operate continuously, and their effective access is defined by how they are used, not how they were originally approved. Without rethinking these assumptions, IAM becomes blind to the real risk AI agents introduce. ... When agents operate on behalf of individual users, they can provide the user with access and capabilities beyond the user’s approved permissions. A user who cannot directly access certain data or perform specific actions may still trigger an agent that can. The agent becomes a proxy, enabling actions the user could never execute on their own. These actions are technically authorized: the agent has valid access. However, they are contextually unsafe.
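One common mitigation for the proxy problem described above is to clamp an agent's effective permissions to the intersection of its own grants and those of the delegating user. A minimal sketch, with hypothetical permission names and an invented `effective_permissions` helper:

```python
def effective_permissions(agent_perms: set, user_perms: set) -> set:
    """Clamp delegated access: an agent acting for a user may exercise
    only permissions that BOTH the agent and that user hold.

    This addresses the 'proxy' problem: an agent with broad grants must
    not let a user do things they were never authorized to do directly.
    """
    return agent_perms & user_perms

# Hypothetical grants: the agent is broadly permissioned, the user is not.
agent = {"read:crm", "write:crm", "read:payroll", "export:reports"}
user  = {"read:crm", "export:reports"}

allowed = effective_permissions(agent, user)
print(sorted(allowed))  # ['export:reports', 'read:crm']
# 'read:payroll' is dropped: the agent holds it, but this user does not,
# so the agent may not exercise it on this user's behalf.
```

The intersection rule restores the IAM invariant the article says is broken: effective access is bounded by what was approved for the human who triggered the action.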


The CISO’s Recovery-First Game Plan

CISOs must be on top of their game to protect an organization’s data. Lapses in cybersecurity around the data infrastructure can be devastating. Therefore, securing infrastructure needs to be airtight. The “game plan” that leads a CISO to success must have the following elements: Immutable snapshots; Logical air-gapping; Fenced forensic environment; Automated cyber protection; Cyber detection; and Near-instantaneous recovery. These six elements constitute the new wave in protecting data: next-generation data protection. There has already been a shift from modern data protection to this substantially higher level of next-gen data protection. A smart CISO would not knowingly leave their enterprise weaker. This is why adoption of automated cyber protection and cyber detection, built right into enterprise storage infrastructure, is increasing as part of this move to next-gen data protection. Automated cyber protection and cyber detection are becoming a basic requirement for all enterprises that want to eliminate the impact of cyberattacks. All of this is vital for the rapid recovery of data within an enterprise after a cyberattack. ... But what would be smart for CISOs to do is to make adjustments based on what they currently have protecting their storage infrastructure. For example, even in a mixed storage environment, you can deploy automated cyber protection through software. You don’t need to rip and replace the cybersecurity systems and applications that you already have in place.


ICE’s expanding use of FRT on minors collides with DHS policy, oversight warnings, law

At the center of the case is DHS’s use of Mobile Fortify, a field-deployed application that scans fingerprints and performs facial recognition, then compares collected data against multiple DHS databases, including CBP’s Traveler Verification Service, Border Patrol systems, and Office of Biometric Identity Management’s Automated Biometric Identification System. The complaint alleges DHS launched Mobile Fortify around June 2025 and has used it in the field more than 100,000 times since launch. Unlike CBP’s traveler entry-exit facial recognition program in which U.S. citizens can decline participation and consenting citizens’ photos are retained only until identity verification, Mobile Fortify is not restricted to ports of entry and is not meaningfully limited as to when, where, or from whom biometrics may be taken. The lawsuit cites a DHS Privacy Threshold Analysis stating that ICE agents may use Mobile Fortify when they “encounter an individual or associates of that individual,” and that agents “do not know an individual’s citizenship at the time of initial encounter” and use Mobile Fortify to determine or verify identity. The same passage, as quoted in the complaint, authorizes collection in identifiable form “regardless of citizenship or immigration status,” acknowledging that a photo captured could be of a U.S. citizen or lawful permanent resident.


From Incident to Insight: How Forensic Recovery Drives Adaptive Cyber Resilience

The biggest flaw is that traditional forensics is almost always reactive, and once complete, it ultimately fails to deliver timely insights that are vital to an organization. For example, analysts often begin gathering logs, memory dumps, and disk images only after a breach has been detected, by which point crucial evidence may be gone. Further compounding matters is the fact that the process is typically fragmented, with separate tools for endpoint detection, SIEM, and memory analysis that make it harder to piece together a coherent narrative. ... Modern forensic approaches capture evidence at the first sign of suspicious activity — preserving memory, process data, file paths, and network activity before attackers can destroy them. The key is storing artifacts securely outside the compromised environment, which ensures their integrity and maintains the chain of custody. The most effective strategies operate on parallel tracks: one dedicated to restoring operations and delivering forensic artifacts, while the other begins immediate investigation. Integrating forensic, endpoint, and network evidence collection replaces silos and blind spots with a comprehensive, cohesive picture of the incident. ... When integrated into the incident response process, forensic recovery investigations begin earlier, compliance reporting is backed by verifiable facts, and legal defenses are equipped with the necessary evidence.
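The chain-of-custody idea described above can be sketched with content hashing at capture time, so any later alteration of an artifact is detectable. A minimal illustration using only Python's standard library; the function names and metadata fields are invented for this example:

```python
import hashlib
import json
import time

def capture_artifact(name: str, data: bytes, collector: str) -> dict:
    """Record a forensic artifact with a content hash and custody metadata.

    Hashing at capture time lets anyone later verify that the evidence
    was not altered after collection -- the core of a defensible chain
    of custody. The manifest itself would be shipped off-host.
    """
    return {
        "artifact": name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "size_bytes": len(data),
        "collected_by": collector,
        "collected_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

def verify_artifact(record: dict, data: bytes) -> bool:
    """Re-hash the evidence and compare against the manifest entry."""
    return hashlib.sha256(data).hexdigest() == record["sha256"]

evidence = b"suspicious process list ..."
record = capture_artifact("proc-list.txt", evidence, collector="analyst-1")
print(json.dumps(record, indent=2))
assert verify_artifact(record, evidence)             # intact
assert not verify_artifact(record, evidence + b"x")  # tampering detected
```

Real tooling adds signed timestamps and write-once storage, but the hash-at-capture step is what makes the "verifiable facts" in compliance reporting verifiable.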


Memgraph founder: Don’t get too loose with your use of MCP

“It is becoming almost universally accepted that without strong curation and contextual grounding, LLMs can misfire, misuse tools, or behave unpredictably. Let me clarify what I mean by ‘tool’, i.e. external capabilities provided to the LLM, ranging from search, calculations and database queries to communication, transaction execution and more, with each exposed as an action or API endpoint through MCP.” ... “But security isn’t actually the main possible MCP stumbling block. Perversely enough, by giving the LLM more capabilities, it might just get confused and end up charging too confidently down a completely wrong path,” said Tomicevic. “This problem mirrors context-window overload: too much information increases error rates. Developers still need to carefully curate the tools their LLMs can access, with best practice being to provide only a minimal, essential set. For more complex tasks, the most effective approach is to break them into smaller subtasks, often leveraging a graph-based strategy.” ... One takeaway from this discussion is that the best of today’s general-purpose models, like those from OpenAI, are trained to use built-in tools effectively. But even with a focused set of tools, organisations are not entirely out of the woods. Context remains a major challenge. Give an LLM a query tool and it runs queries; but without understanding the schema or what the data represents, it won’t generate accurate or meaningful queries.
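Tomicevic's "minimal, essential set" advice can be sketched as a per-task tool allowlist, so the model only ever sees the tools relevant to its current subtask. The `ToolRegistry` below is a hypothetical illustration of the practice, not MCP's actual API:

```python
class ToolRegistry:
    """Expose only a curated, per-task subset of tools to the model,
    mirroring the 'minimal, essential set' practice quoted above."""

    def __init__(self):
        self._tools = {}           # tool name -> callable
        self._task_allowlist = {}  # task name -> set of allowed tool names

    def register(self, name, fn):
        self._tools[name] = fn

    def allow(self, task, *names):
        self._task_allowlist.setdefault(task, set()).update(names)

    def tools_for(self, task):
        # The model only ever sees this subset, reducing the chance it
        # 'charges too confidently down a completely wrong path'.
        allowed = self._task_allowlist.get(task, set())
        return {n: f for n, f in self._tools.items() if n in allowed}

reg = ToolRegistry()
reg.register("search", lambda q: f"results for {q}")
reg.register("run_query", lambda sql: f"rows for {sql}")
reg.register("send_email", lambda to, body: "sent")

reg.allow("reporting", "run_query")        # reporting subtask: queries only
print(sorted(reg.tools_for("reporting")))  # ['run_query']
```

Breaking a complex task into subtasks then amounts to defining several narrow allowlists instead of one broad one, which is the graph-based decomposition the quote alludes to.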


Speaking the Same Language: Decoding the CISO-CFO Disconnect

On the surface, things look good: 88% of security leaders believe their priorities match business goals, and 55% of finance leaders view cybersecurity as a core strategic driver. However, the conviction is shallow. ... For CISOs, the report is a wake-up call regarding their perceived business acumen. While security leaders feel they are working hard to protect the organization, finance remains skeptical of their execution. The translation gap: Only 52% of finance leaders are "very confident" that their security team can communicate business impact clearly. Prioritization doubts: Just 43% of finance leaders feel very confident that security can prioritize investments based on actual risk. Strategy versus operations: Only 40% express full confidence in security's ability to align with business strategy. ... Chief Financial Officers are increasingly taking responsibility for enterprise risk management and cyber insurance, yet they feel they are operating with incomplete data. Efficiency concerns: Only 46% of finance leaders are very confident that security can deliver cost-efficient solutions. Perception of value: CFOs are split, with 38% viewing cybersecurity as a strategic enabler, while another 38% still view it as a cost center. ... "When security is done right, it doesn't slow the business down—it gives leadership the confidence to move faster. And to do that, you have to be able to connect with your CFO and COO through stories. Dashboards full of red, yellow, and green don't help a CFO," said Krista Arndt.

Daily Tech Digest - December 02, 2025


Quote for the day:

"I am not a product of my circumstances. I am a product of my decisions." -- Stephen Covey



The CISO’s paradox: Enabling innovation while managing risk

When security understands revenue goals, customer promises and regulatory exposure, guidance becomes specific and enabling. Begin by embedding a security liaison with each product squad so there is always a known face to engage in identity, data flows, logging and encryption decisions as they form. Engineers should not have to open two-week tickets for a simple question. There should be open “office hours,” chat channels and quick calls so they can get immediate feedback on decisions like API design, encryption requirements and regional data moves. ... Show up at sprint planning and early design reviews to ask the questions that matter — authentication paths, least-privilege access, logging coverage and how changes will be monitored in production through SIEM and EDR. When security officers sit at the same table, the conversation changes from “Can we do this?” to “How do we do this securely?” and better outcomes follow from day one. ... When developers deploy code multiple times a day, a “final security review” before launch just wouldn’t work. This traditional, end-of-line gating model doesn’t just block innovation but also fails to catch real-world risks. To be effective, security must be embedded during development, not just inspected after. ... This discipline must further extend into production. Even with world-class DevSecOps, we know a zero-day or configuration drift can happen.


Resilience Means Fewer Recoveries, Not Faster Ones

Resilience has become one of the most overused words in management. Leaders praise teams for “pushing through” and “bouncing back,” as if the ability to absorb endless strain were proof of strength. But endurance and resilience are not the same. Endurance is about surviving pressure. Resilience is about designing systems so people don’t break under it. Many organizations don’t build resilience; they simply expect employees to endure more. The result is a quiet crisis of exhaustion disguised as dedication. Teams appear committed but are running on fumes. ... In most organizations, a small group carries the load when things get tough — the dependable few who always say yes to the most essential tasks. That pattern is unsustainable. Build redundancy into the system by cross-training roles, rotating responsibilities, and decentralizing authority. The goal isn’t to reduce pressure to zero; it’s to distribute it evenly enough so that no one person becomes the safety net for everyone else. ... Too many managers equate resilience with recovery, celebrating those who saved the day after the crisis is over. But true resilience shows up before the crisis hits. Observe your team to recognize the people who spot problems early, manage risks quietly, or improve workflows so that breakdowns don’t happen. Crisis prevention doesn’t create dramatic stories, but it builds the calm, predictable environment that allows innovation to thrive.


Facial Recognition’s Trust Problem

Surveillance by facial recognition is almost always in a public setting, so it’s one-to-many. There is a database and many cameras (usually a large number of cameras – an estimated one million in London and more than 30,000 in New York). These cameras capture images of people and compare them to the database of known images to identify individuals. The owner of the database may include watchlists of ‘people of interest’, enabling such persons to be tracked from one camera to another. But the process of capturing and using the images is almost always non-consensual. People don’t know when, where or how their facial image was first captured, and they don’t know where their data is going downstream or how it is used after initial capture. Nor are they usually aware of the facial recognition cameras that record their passage through the streets. ... Most people are wary of facial recognition systems. They are considered personally intrusive and privacy invasive. Capturing a facial image and using it for unknown purposes is not something that is automatically trusted. And yet it is not something that can be ignored – it’s part of modern life and will continue to be so. In the two primary purposes of facial recognition – access authentication and the surveillance of public spaces – the latter is the least acceptable. It is used for the purpose of public safety but is fundamentally insecure. What exists now can be, and has been, hijacked by criminals for their own purposes.


The Urgent Leadership Playbook for AI Transformation

Banking executives talk enthusiastically about AI. They mention it frequently in investor presentations, allocate budgets to pilot programs, and establish innovation labs. Yet most institutions find themselves frozen between recognition of AI’s potential and the organizational will to pursue transformation aggressively. ... But waiting for perfect clarity guarantees competitive disadvantage. Even if only 5% of banks successfully embed AI across operations — and the number will certainly grow larger — these institutions will alter industry dynamics sufficiently to render non-adopters progressively irrelevant. Early movers establish data advantages, algorithmic sophistication, and operational efficiencies that create compounding benefits difficult for followers to overcome. ... The path from today’s tentative pilots to tomorrow’s AI-first institution follows a proven playbook developed by "future-built" companies in other sectors that successfully generate measurable value from AI at enterprise scale. ... Scaling AI requires reimagining organizational structures around technology-human collaboration based on three-layer guardrails: agent policy layers defining permissible actions, assurance layers providing controls and audit trails, and human responsibility layers assigning clear ownership for each autonomous domain.
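The three-layer guardrail model described above (an agent policy layer, an assurance layer, and a human responsibility layer) can be sketched as a chain of checks an agent action must pass before execution. All names and data shapes below are hypothetical illustrations of the idea, not a real framework:

```python
def policy_layer(action, allowed_actions):
    """Layer 1: is this action type permissible for this agent at all?"""
    return action["type"] in allowed_actions

def assurance_layer(action, audit_log):
    """Layer 2: record the action for controls and audit trails."""
    audit_log.append({"action": action["type"], "target": action["target"]})
    return True

def human_responsibility_layer(action, owners):
    """Layer 3: every autonomous domain must have a named human owner."""
    return action["domain"] in owners

def guarded_execute(action, allowed_actions, audit_log, owners):
    # An action proceeds only if all three layers pass.
    return (policy_layer(action, allowed_actions)
            and assurance_layer(action, audit_log)
            and human_responsibility_layer(action, owners))

log = []
owners = {"payments": "alice@bank.example"}  # hypothetical domain owner
ok = guarded_execute(
    {"type": "refund", "target": "txn-991", "domain": "payments"},
    allowed_actions={"refund", "lookup"}, audit_log=log, owners=owners)
print(ok, len(log))  # True 1
```

The point of the layering is separation of concerns: what an agent may do, how its actions are evidenced, and which human answers for each autonomous domain are three independently auditable questions.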


Creative cybersecurity strategies for resource-constrained institutions

There’s a well-worn phrase that gets repeated whenever budgets are tight: “We have to do more with less.” I’ve never liked it because it suggests the team wasn’t already giving maximum effort. Instead, the goal should be to “use existing resources more effectively.” ... When you understand the users’ needs and learn how they want to work, you can recommend solutions that are both secure and practical. You don’t need to be an expert in every research technology. Start by paying attention to the services offered by cloud providers and vendors. They constantly study user pain points and design tools to address them. If you see a cloud service that makes it easier to collect, store, or share scientific data, investigate what makes it attractive. ... First, understand how your policies and controls affect the work. Security shouldn’t be developed in a vacuum. If you don’t understand the impact on researchers, developers, or operational teams, your controls may not be designed and implemented in a manner that helps enable the business. Second, provide solutions, don’t just say no. A security team that only rejects ideas will be thought of as a roadblock, and users will do their best to avoid engagement. A security team that helps people achieve their goals securely becomes one that is sought out, and ultimately ensures the business is more secure.


Architecting Intelligence: A Strategic Framework for LLM Fine-Tuning at Scale

As organizations race to harness the transformative power of Large Language Models, a critical gap has emerged between experimental implementations and production-ready AI systems. While prompt engineering offers a quick entry point, enterprises seeking competitive advantage must architect sophisticated fine-tuning pipelines that deliver consistent, domain-specific intelligence at scale. The landscape of LLM deployment presents three distinct approaches, each with architectural implications. The answer lies in understanding the maturity curve of AI implementation. ... Fine-tuning represents the architectural apex of AI implementation. Rather than relying on prompts or external knowledge bases, fine-tuning modifies the AI model itself by continuing its training on domain-specific data. This embeds organizational knowledge, reasoning patterns, and domain expertise directly into the model’s parameters. Think of it this way: a general-purpose AI model is like a talented generalist who reads widely but lacks deep expertise. ... The decision involves evaluating several factors. Model scale matters because larger models generally offer better performance but demand more computational resources. An organization must balance the quality improvements of a 70-billion-parameter model against the infrastructure costs and latency implications.
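The infrastructure side of the 70-billion-parameter trade-off can be roughed out with simple arithmetic: the weights alone, at 2 bytes per parameter for fp16/bf16 inference, dominate the memory footprint, and full fine-tuning with optimizer state needs several times more. A back-of-envelope sketch:

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to hold model weights.

    bytes_per_param: 2 for fp16/bf16 inference, 4 for fp32; full
    fine-tuning with gradients and optimizer state can need several
    times this figure.
    """
    return n_params * bytes_per_param / 1024**3

for billions in (7, 70):
    gb = model_memory_gb(billions * 1e9)
    print(f"{billions}B params @ fp16: ~{gb:.0f} GB of weights")
# 7B params @ fp16: ~13 GB of weights
# 70B params @ fp16: ~130 GB of weights
```

The jump from roughly 13 GB to roughly 130 GB of weights is the difference between a single accelerator and a multi-GPU serving cluster, which is exactly the cost-versus-quality balance the passage describes.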


How smart tech innovation is powering the next generation of the trucking industry

Real-time tracking has now become the backbone of digital trucking. These systems provide real-time updates on vehicle location, fuel consumption, driving behavior and engine performance. Fleets also make informed, data-driven decisions that directly influence operational efficiency. Furthermore, IoT-enabled ‘one app’ solutions monitor cargo temperature, location, and overall load conditions throughout the journey. ... Now, with AI-driven algorithms, fleet managers anticipate optimal routes via analysis of historical demand, weather patterns, and traffic. AI-powered intelligent route optimization applications allow fleets to optimize fuel usage and lower travel times. Additionally, with predictive maintenance capabilities, trucking companies are less concerned about vehicle failures, because a more proactive approach is used. AI tools spot anomalies in engine data and warn fleet owners before expensive vehicle failures occur, improving overall fleet operations. ... The trucking industry is transforming faster than ever before. Technologies are turning every vehicle into a connected network node and digital asset. Fleets can forecast demand, optimize routes, preserve cargo quality, and ensure safety at every step. These smarter goals align seamlessly with cost-saving opportunities as logistics aggregators transition from manual, paperwork-heavy processes to the convenience of a digital locker.
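The anomaly-spotting idea can be illustrated with something as simple as a z-score check over engine telemetry. Real predictive-maintenance systems use far richer models; the readings and threshold below are made up for illustration:

```python
from statistics import mean, stdev

def engine_anomalies(readings, threshold=2.5):
    """Flag readings more than `threshold` standard deviations from the
    mean -- a toy stand-in for the AI anomaly detection described above."""
    mu, sigma = mean(readings), stdev(readings)
    return [(i, r) for i, r in enumerate(readings)
            if sigma > 0 and abs(r - mu) / sigma > threshold]

# Hypothetical engine temperature samples (deg C) with one spike at index 7:
temps = [88, 90, 89, 91, 90, 88, 89, 142, 90, 89]
print(engine_anomalies(temps))  # [(7, 142)]
```

Flagging the spike before it becomes a roadside breakdown is the "proactive approach" the passage refers to; production systems would add seasonality handling and per-vehicle baselines.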


Why every business needs to start digital twinning in 2026

Digital twins have begun to stand out because they’re not generic AI stand-ins; at their best they’re structured behavioural models grounded in real customer data. They offer a dependable way to keep insights active, consistent and available on demand. That is where their true strategic value lies. More granularly, the best performing digital twins are built on raw existing customer insights – interview transcripts, survey results, and behavioural data. But rather than just summarising the data, they create a representation of how a particular individual tends to think. Their role isn’t to imitate someone’s exact words, but to reflect underlying logic, preferences, motivations and blind spots. ... There’s no denying the fact that organisations have had a year of big promises and disappointing AI pilots, with the result that businesses are far more selective about what genuinely moves the needle. For years, digital twinning has been used to model complex systems in engineering, aerospace and manufacturing, where failure is expensive and iteration must happen before anything becomes real. With the rise of generative AI, the idea of a digital twin has expanded. After a year of rushed AI pilots and disappointing ROI, leaders are looking for approaches that actually fit how businesses work. Digital twinning does exactly that: it builds on familiar research practices, works inside existing workflows, and lets teams explore ideas safely before committing to them.


From compliance to confidence: Redefining digital transformation in regulated enterprises

Compliance is no longer the brake on digital transformation. It is the steering system that determines how fast and how far innovation can go. ... Technology rarely fails because of a lack of innovation. It fails when organizations lack the governance maturity to scale innovation responsibly. Too often, compliance is viewed as a bottleneck. It’s a scalability accelerator when embedded early. ... When governance and compliance converge, they unlock a feedback loop of trust. Consider a payer-provider network that unified its claims, care and compliance data into a single “truth layer.” Not only did this integration reduce audit exceptions by 45%, but it also improved member-satisfaction scores because interactions became transparent and consistent. ... No transformation from compliance to confidence happens without leadership alignment. The CIO sits at the intersection of technology, policy and culture and therefore carries the greatest influence over whether compliance is reactive or proactive. ... Technology maturity alone is not enough. The workforce must trust the system. When employees understand how AI or analytics systems make decisions, they become more confident using them. ... Confidence is not the absence of regulation; it’s mastery of it. A confident enterprise doesn’t fear audits because its systems are inherently explainable. 


AI agents are already causing disasters - and this hidden threat could derail your safe rollout

Although artificial intelligence agents are all the rage these days, the world of enterprise computing is experiencing disasters in the fledgling attempts to build and deploy the technology. Understanding why this happens and how to prevent it is going to involve lots of planning in what some are calling the zero-day deliberation. "You might have hundreds of AI agents running on a user's behalf, taking actions, and, inevitably, agents are going to make mistakes," said Anneka Gupta, chief product officer for data protection vendor Rubrik. ... Gupta talked about more than just a product pitch. Fixing well-intentioned disasters is not the biggest agent issue, she said. The big picture is that agentic AI is not moving forward as it should because of zero-day issues. "Agent Rewind is a day-two issue," said Gupta. "How do we solve for these zero-day issues to start getting people moving faster -- because they are getting stuck right now." ... According to Gupta, the true problem of agent deployment is all the work that begins with the chief information security officer (CISO), the chief information officer (CIO), and other senior management to figure out the scope of agents. AI agents are commonly defined as artificial intelligence programs that have been granted access to resources external to the large language model itself, enabling the AI program to carry out a wider variety of actions. ... The real zero-day obstacle is how to understand what agents are supposed to be doing, and how to measure what success or failure would look like.

Daily Tech Digest - September 07, 2025


Quote for the day:

"The struggle you're in today is developing the strength you need for tomorrow." -- #Soar2Success


The Automation Bottleneck: Why Data Still Holds Back Digital Transformation

Even in firms with well-funded digital agendas, legacy system sprawl is an ongoing headache. Data lives in silos, formats vary between regions and business units, and integration efforts can stall once it becomes clear just how much human intervention is involved in daily operations. Elsewhere, the promise of straight-through processing clashes with manual workarounds, from email approvals and spreadsheet imports to ad hoc scripting. Rather than symptoms of technical debt, these gaps point to automation efforts that are being layered on top of brittle foundations. Until firms confront the architectural and operational barriers that keep data locked in fragmented formats, automation will also remain fragmented. Yes, it will create efficiency in isolated functions, but not across end-to-end workflows. And that’s an unforgiving limitation in capital markets where high trade volumes, vast data flows, and regulatory precision are all critical. ... What does drive progress are purpose-built platforms that understand the shape and structure of industry data from day one, moving, enriching, validating, and reformatting it to support the firm’s logic. Reinventing the wheel for every process isn’t necessary, but firms do need to acknowledge that, in financial services, data transformation isn’t some random back-office task. It’s a precondition for the type of smooth and reliable automation that prepares firms for the stark demands of a digital future.


Switching on resilience in a post-PSTN world

The copper PSTN network, first introduced in the Victorian era, was never built for the realities of today’s digital world. The PSTN was installed in the early 80s, and early broadband was introduced using the same lines in the early 90s. And the truth is, it needs to retire, having operated past its maintainable life span. Modern work depends on real-time connectivity and data-heavy applications, with expectations around speed, scalability, and reliability that outpace the capabilities of legacy infrastructure. ... Whether it’s a GP retrieving patient records or an energy network adjusting supply in real time, their operations depend on uninterrupted, high-integrity access to cloud systems and data center infrastructure. That’s why the PSTN switch-off must be seen not as a telecoms milestone, but as a strategic resilience imperative. Without universal access upgrades, even the most advanced data centers can’t fulfil their role. The priority now is to build a truly modern digital backbone. One that gives homes, businesses, and CNI facilities alike robust, high-speed connectivity into the cloud. This is about more than retiring copper. It’s about enabling a smarter, safer, more responsive nation. Organizations that move early won’t just minimize risk; they’ll unlock new levels of agility, performance, and digital assurance.


Neither driver, nor passenger — covenantal co-creator

The covenantal model rests on a deeper premise: that intelligence itself emerges not just from processing information, but from the dynamic interaction between different perspectives. Just as human understanding often crystallizes through dialogue with others, AI-human collaboration can generate insights that exceed what either mind achieves in isolation. This isn't romantic speculation. It's observable in practice. When human contextual wisdom meets AI pattern recognition in genuine dialogue, new possibilities emerge. When human ethical intuition encounters AI systematic analysis, both are refined. When human creativity engages with AI synthesis, the result often transcends what either could produce alone. ... Critics will rightfully ask: How do we distinguish genuine partnership from sophisticated manipulation? How do we avoid anthropomorphizing systems that may simulate understanding without truly possessing it? ... The real danger isn't just AI dependency or human obsolescence. It's relational fragmentation — isolated humans and isolated AI systems operating in separate silos, missing the generative potential of genuine collaboration. What we need isn't just better drivers or more conscious passengers. We need covenantal spaces where human and artificial minds can meet as genuine partners in the work of understanding.


Facial recognition moves into vehicle lanes at US borders

According to the PTA, VBCE relies on a vendor capture system embedded in designated lanes at land ports of entry. As vehicles approach the primary inspection lane, high-resolution cameras capture facial images of occupants through windshields and side windows. The images are then sent to the VBCE platform where they are processed by a “vendor payload service” that prepares the files for CBP’s backend systems. Each image is stored temporarily in Amazon Web Services’ S3 cloud storage, accompanied by metadata and quality scores. An image-quality service assesses whether the photo is usable while an “occupant count” algorithm tallies the number of people in the vehicle to measure capture rates. A matching service then calls CBP’s Traveler Verification Service (TVS) – the central biometric database that underpins Simplified Arrival – to retrieve “gallery” images from government holdings such as passports, visas, and other travel documents. The PTA specifies that an “image purge service” will delete U.S. citizen photos once capture and quality metrics are obtained, and that all images will be purged when the evaluation ends. Still, during the test phase, images can be retained for up to six months, a far longer window than the 12-hour retention policy CBP applies in operational use for U.S. citizens.
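The flow the PTA describes (an image-quality score, an occupant count used to measure capture rates, and a purge rule for U.S.-citizen photos) can be sketched as a toy triage function. Everything below, from the field names to the quality threshold, is a hypothetical illustration of that decision flow, not CBP's or the vendor's actual system:

```python
from dataclasses import dataclass

@dataclass
class Capture:
    image_id: str
    quality: float          # 0..1 score from an image-quality service
    occupants_seen: int     # from an occupant-count algorithm
    occupants_matched: int  # occupants successfully matched to a gallery
    us_citizen: bool

QUALITY_FLOOR = 0.6  # hypothetical usability threshold

def triage(capture: Capture) -> dict:
    """Score one capture, compute its capture-rate metric, and apply
    the purge rule described in the PTA (illustrative only)."""
    usable = capture.quality >= QUALITY_FLOOR
    capture_rate = (capture.occupants_matched / capture.occupants_seen
                    if capture.occupants_seen else 0.0)
    # Per the PTA: delete U.S.-citizen photos once metrics are recorded.
    purge = capture.us_citizen
    return {"usable": usable, "capture_rate": capture_rate, "purge": purge}
```

The interesting policy detail is that the purge decision is independent of image quality: metrics are kept, the citizen's image is not.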


Quantum Computing Meets Finance

Many financial-asset-pricing problems boil down to solving integral or partial differential equations. Quantum linear algebra can potentially speed that up. But the solution is a quantum state. So, you need to be creative about capturing salient properties of the numerical solution to your asset-pricing model. Additionally, pricing models are subject to ambiguity regarding sources of risk—factors that can adversely affect an asset’s value. Quantum information theory provides tools for embedding notions of ambiguity. ... Recall that some of the pioneering research on quantum algorithms was done in the 1990s by scientists like Deutsch, Shor, and Vazirani, among others. Today it’s still a challenge to implement their ideas with current hardware, and that’s three decades later. But besides hardware, we need progress on algorithms—there’s been a bit of a quantum algorithm winter. ... Optimization tasks from across industries, including computational chemistry, materials science, and artificial intelligence, also apply in the financial sector. These optimization algorithms are making progress. In particular, the ones related to quantum annealing run on the most reliably scaled hardware out there. ... The most well-known case is portfolio allocation. You have to translate that into what’s known as quadratic unconstrained binary optimization, which means making compromises to stay within what you can actually compute.
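That last translation step, recasting portfolio allocation as quadratic unconstrained binary optimization (QUBO), can be made concrete with a small sketch: minimize x^T Q x over binary x, where Q trades expected return against risk and a penalty term softly enforces picking exactly k assets. The toy numbers are illustrative assumptions, and the brute-force solver merely stands in for a quantum annealer:

```python
import itertools
import numpy as np

def portfolio_qubo(mu, sigma, risk_aversion, k, penalty):
    """Build Q so that minimizing x^T Q x over binary x trades off
    return against risk, penalizing selections of != k assets."""
    n = len(mu)
    Q = risk_aversion * np.array(sigma, dtype=float)
    # Reward expected return on the diagonal (binary x => x_i^2 == x_i).
    Q -= np.diag(mu)
    # penalty*(sum(x)-k)^2 expands to penalty*(x^T 1 1^T x - 2k*sum(x)) + const.
    Q += penalty * (np.ones((n, n)) - 2 * k * np.eye(n))
    return Q

def brute_force_min(Q):
    """Exhaustive stand-in for an annealer; only viable for tiny n."""
    n = Q.shape[0]
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

mu = np.array([0.12, 0.10, 0.07, 0.03])    # toy expected returns
sigma = np.diag([0.10, 0.05, 0.02, 0.01])  # toy (diagonal) covariance
Q = portfolio_qubo(mu, sigma, risk_aversion=0.5, k=2, penalty=1.0)
x, val = brute_force_min(Q)
print("selected assets:", x)
```

The "compromises" the interviewee mentions show up directly here: the hard cardinality constraint becomes a soft penalty, and real-valued weights become binary include/exclude decisions.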


Beyond IT: How Today’s CIOs Are Shaping Innovation, Strategy and Security

It’s no longer acceptable to measure success by uptime or ticket resolution. Your worth is increasingly measured by your ability to partner with business units, translate their needs into scalable technology solutions and get those solutions to market quickly. That means understanding not just the tech, but the business models, revenue drivers and customer expectations. You don’t need to be an expert in marketing or operations, but you need to know how your decisions in architecture, tooling, and staffing directly impact their outcomes. ... Security and risk management are no longer checkboxes handled by a separate compliance team. They must be embedded into the DNA of your tech strategy. Becky refers to this as “table stakes,” and she’s right. If you’re not building with security from the outset, you’re building on sand. That starts with your provisioning model. We’re in a world where misconfigurations can take down global systems. Automated provisioning, integrated compliance checks and audit-ready architectures are essential. Not optional. ... CIOs need to resist the temptation to chase hype. Your core job is not to implement the latest tools. Your job is to drive business value and reduce complexity so your teams can move fast, and your systems remain stable. The right strategy? Focus on the essentials: Automated provisioning, integrated security and clear cloud cost governance. 


The Difference Between Entrepreneurs Who Survive Crises and Those Who Don't

Among the most underrated strategies for protecting reputation, silence holds a special place. It is not passivity; it's an intentional, active choice. Deciding not to react immediately to a provocation buys time to think, assess and respond surgically. Silence has a precise psychological effect: It frustrates your attacker, often pushing them to overplay their hand and make mistakes. This dynamic is well known in negotiation — those who can tolerate pauses and gaps often control the rhythm and content of the exchange. ... Anticipating negative scenarios is not pessimism — it's preparation. It means knowing ahead of time which actions to avoid and which to take to safeguard credibility. As Eccles, Newquist, and Schatz note in Harvard Business Review, a strong, positive reputation doesn't just attract top talent and foster customer loyalty — it directly drives higher pricing power, market valuation and investor confidence, making it one of the most valuable yet vulnerable assets in a company's portfolio. ... Too much exposure without a solid reputation makes an entrepreneur vulnerable and easily manipulated. Conversely, those with strong credibility maintain control even when media attention fades. In the natural cycle of public careers, popularity always diminishes over time. What remains — and continues to generate opportunities — is reputation. 


Ship Faster With 7 Oddly Specific DevOps Habits

PowerPoint can lie; your repo can’t. If “it works on my machine” is still a common refrain, we’ve left too much to human memory. We make “done” executable. Concretely, we put a Makefile (or a tiny task runner) in every repo so anyone—developer, SRE, or manager who knows just enough to be dangerous—can run the same steps locally and in CI. The pattern is simple: a single entry point to lint, test, build, and package. That becomes the contract for the pipeline. ... Pipelines shouldn’t feel like bespoke furniture. We keep a single “paved path” workflow that most repos can adopt unchanged. The trick is to keep it boring, fast, and self-explanatory. Boring means a sane default: lint, test, build, and publish on main; test on pull requests; cache aggressively; and fail clearly. Fast means smart caching and parallel jobs. Self-explanatory means the pipeline tells you what to do next, not just that you did it wrong. When a team deviates, they do it consciously and document why. Most of the time, they come back to the path once they see the maintenance cost of custom tweaks. ... A release isn’t done until we can see it breathing. We bake observability in before the first customer ever sees the service. That means three things: usable logs, metrics with labels that match our domain (not just infrastructure), and distributed traces. On top of those, we define one or two Service Level Objectives with clear SLIs—usually success rate and latency. 
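The "single entry point" contract described above can be a Makefile or, equally, a tiny task runner. Here is a minimal sketch of the latter as a hypothetical tasks.py; the tool names are placeholders for whatever the repo actually uses:

```python
#!/usr/bin/env python3
"""Minimal task runner: one entry point for lint, test, and build,
identical locally and in CI. Tool commands are illustrative."""
import subprocess
import sys

TASKS = {
    "lint": ["ruff", "check", "."],
    "test": ["pytest", "-q"],
    "build": ["python", "-m", "build"],
}

def run(name: str) -> int:
    """Run one named task, echoing the command so CI logs stay readable."""
    cmd = TASKS[name]
    print(f"--> {name}: {' '.join(cmd)}")
    return subprocess.call(cmd)

if __name__ == "__main__":
    targets = sys.argv[1:]
    if not targets:
        print("available tasks:", ", ".join(TASKS))
    for t in targets:
        if t not in TASKS:
            sys.exit(f"unknown task: {t}")
        if run(t) != 0:
            sys.exit(1)
```

The point is the contract, not the tool: CI invokes the same `tasks.py lint test build` that a developer (or a manager who knows just enough to be dangerous) runs locally.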


Kali Linux vs Parrot OS – Which Penetration Testing Platform is Most Suitable for Cybersecurity Professionals?

Kali Linux ships with over 600 pre-installed penetration testing tools, carefully curated to cover the complete spectrum of security assessment activities. The toolset spans multiple categories, including network scanning, vulnerability analysis, exploitation frameworks, digital forensics, and post-exploitation utilities. Notable tools include the Metasploit Framework for exploitation testing, Burp Suite for web application security assessment, Nmap for network discovery, and Wireshark for protocol analysis. The distribution’s strength lies in its comprehensive coverage of penetration testing methodologies, with tools organized into logical categories that align with industry-standard testing procedures. The inclusion of cutting-edge tools such as Sqlmc for SQL injection testing, Sprayhound for password spraying integrated with Bloodhound, and Obsidian for documentation purposes demonstrates Kali’s commitment to addressing evolving security challenges. ... Parrot OS distinguishes itself through its holistic approach to cybersecurity, offering not only penetration testing tools but also integrated privacy and anonymity features. The distribution includes over 600 tools covering penetration testing, digital forensics, cryptography, and privacy protection. Key privacy tools include Tor Browser, AnonSurf for traffic anonymization, and zuluCrypt for encryption operations.


How Artificial Intelligence Is Reshaping Cybersecurity Careers

The AI-enhanced SOC analyst role upends traditional security operations: analysts leverage artificial intelligence to enhance their threat detection and incident response capabilities. These positions work with existing analyst platforms capable of autonomous reasoning that mimics expert analyst workflows, correlating evidence, reconstructing timelines, and prioritizing real threats at a much faster rate. ... AI Risk Analysts and Governance Specialists ensure responsible AI deployment through risk assessments and adherence to compliance frameworks. Professionals in this role may hold a certification like the AIGP. This certification demonstrates that the holder can ensure safety and trust in the development and deployment of ethical AI and ongoing management of AI systems. This role requires foundational knowledge of AI systems and their use cases, the impacts of AI, and comprehension of responsible AI principles. ... AI Forensics Specialists represent an emerging role that combines traditional digital forensics with AI-specific environments and technology. This role is designed to analyze model behavior, trace adversarial attacks, and provide expert testimony in legal proceedings involving AI systems. While classic digital forensics focuses on post-incident investigations, preserving evidence and chain of custody, and reconstructing timelines, AI forensics specialists must additionally possess knowledge of machine learning algorithms and frameworks.

Daily Tech Digest - June 09, 2025


Quote for the day:

"Motivation gets you going and habit gets you there." -- Zig Ziglar


Architecting Human-AI Relationships: Governance Frameworks for Emotional AI Integration

The talent retention implications prove equally compelling, particularly as organizations compete for digitally native workforce demographics who view AI collaboration as a natural extension of professional relationships. ... Perhaps most significantly, healthy human-AI collaboration frameworks unleash innovation potential that traditional technology deployment approaches consistently fail to achieve. When teams feel psychologically safe in their AI partnerships—confident that transitions will be managed thoughtfully and that their emotional investment in digital collaborators is acknowledged and supported—they demonstrate a remarkable willingness to explore advanced AI capabilities, experiment with novel applications, and push the boundaries of what artificial intelligence can accomplish within organizational contexts. ... The ultimate result is organizational resilience that extends far beyond technical robustness. Comprehensive governance approaches that address technical performance and psychological factors create AI ecosystems that adapt gracefully to technological change, maintain continuity through system transitions, and sustain collaborative effectiveness across the inevitable evolution of artificial intelligence capabilities.


CISOs reposition their roles for business leadership

“The CISOs of the present and the future need to get out of being just technologists and build their influence muscle as well as their communication muscle,” Kapil says. They need to be able to “relay the technology and cyber messaging in words and meanings where a non-technologist actually understands why we’re doing what we’re doing.” ... “CISOs who are enablers can have the greatest impact on the business because they understand the business objectives,” LeMaire explains. “I like to say we don’t do cybersecurity for cybersecurity’s sake. … Ultimately, we do cybersecurity to contribute to the goals, missions, and objectives of the greater organization. When you’re an enabler that’s what you’re doing.” ... The BISO role emerged to bridge the gap between business objectives and cybersecurity oversight that has existed in many companies, Petrik says. “By acting as a liaison between business, technology, and cybersecurity teams, the BISO ensures that security measures are aligned with business strategies and integrated effectively,” he says. Digital transformation, emerging technologies, and rapid innovation are business mandates, and security teams add value and manage risk better when they are involved before a platform is selected or implemented, he says.


Balancing Safety and Security in Software-Defined Vehicles

Features such as Bluetooth, Wi-Fi, and cellular networks improve user convenience but create multiple attack vectors. For example, infotainment systems, because of their connectivity, are prime targets on software-defined vehicles. The recent Nissan LEAF hack revealed exactly this vulnerability, with researchers using the vehicle’s infotainment system as an entry point to access critical vehicle controls, including the steering. Not only can attackers gain access to data and location information, they can use vulnerable infotainment systems as an on-ramp to access other critical vehicle systems, like Advanced Driver Assistance Systems (ADAS), CAN-Bus, or key engine control units. ... Real-Time Operating Systems play a key role in the functionality of software-defined vehicles, as they enable precise, time-critical operations for systems like Electronic Control Units (ECUs). ECUs are primarily programmed in C and C++ due to the need for efficiency and performance in resource-constrained environments. ... Memory-based vulnerabilities, inherent to C/C++ programming, can be exploited to enable remote code execution, potentially compromising critical safety and performance systems. This creates serious cybersecurity and reliability concerns for vehicles. As RTOS suppliers manage numerous processes, any vulnerability in their codebase can be a gateway for attackers, increasing the likelihood of malicious exploits across the interconnected vehicle ecosystem.


The agile blueprint for simplifying performance management: Rethinking reviews for real impact

Understanding performance has a psychological side to it. Recognising this effect on performance frameworks, Rashmi suggested that imposter syndrome can be mitigated by making progress visible. “When you see your results in real time, you can’t keep criticising yourself.” The panellists encouraged managers to have personal discussions with their team members, which would help them build bonds. Rashmi highlighted this aspect, which can be leveraged through AI. “If AI says that there has been no potential feedback for the employee in the last month, then let the technology help the manager remind.” She also added, “Scaling up makes the quarterly reviews an exercise; hence, spontaneous quarterly check-ins are important.” Rashmi also advocated for weekly, human-centred check-ins, features integrated in HRStop, which go beyond tracking project status to understanding employees as people. “Treat it like a family discussion,” Rashmi recommended. “A touch of personal conversation builds deeper rapport.” Another aspect that came up in the discussion was coaching. Vimal emphasised that coaching must happen at all levels—from CXOs to interns. “It’s this cultural consistency that builds trust, retention, and performance,” he added.


Is this the perfect use case for police facial recognition?

First, as the judge noted, “fortunately the technology available prevented physical contact going further”. Availability is important here, not just in terms of the equipment being accessible; it has a specific legal element too. Where the technological means to prevent inhumane or degrading treatment are reasonably available to the police, the law in England and Wales may not just permit the use of remote biometric technology, it may even require it. I’m unaware of anyone relying on this human rights argument yet and we won’t know if these conditions would have met that threshold. ... Second, the person was on the watchlist because he was subject to a court order. This was not the public under ‘general surveillance’: a court had been satisfied on the evidence presented that an order was necessary to protect the public from sexual harm from him. He breached that order by insinuating himself into the life of a 6-year-old girl and was found alone with her. He was accurately matched with the watchlist image. The third feature is that the technology did its job. It would be easy to celebrate this as a case of ‘thank goodness nothing happened’ but that would underestimate its significance and miss the legal areas where FRT will be challenged. 


IT leaders’ top 5 barriers to AI success

Data quality issues are a real concern and an actual barrier to AI adoption, but the problem is much larger than the traditional and typical discussion about data quality in transactional or analytical environments, says John Thompson, senior vice president and principal at AI consulting firm The Hackett Group. “With gen AI, literally 100% of an organization’s data, documents, videos, policies, procedures, and more are available for active use,” Thompson says. This is a much larger issue than data quality in systems such as enterprise resource planning (ERP) or customer relationship management (CRM), he says. ... Organizations need the infrastructure in place to educate and train its employees to understand the capabilities and limitations of AI, Ally’s Muthukrishnan says. “Without the right training, adoption and utilization will not achieve the outcome you’re hoping for,” he adds. “While I believe AI is one of the largest tech transformations of our lifetime, integrating it into day-to-day processes is a huge change management undertaking.” ... “The skills gap is only going to grow,” Hackett Group’s Thompson says. “Now is the time to start. You can start with your team. Have them work on test cases. Have them work on personal projects. Have them work on passion projects. [Taking] time for everyone to take a class is just elongating the process to close the skills gap. ...”


Google’s Cloud IDP Could Replace Platform Engineering

Much of the work behind the Google Cloud IDP comes from Anna Berenberg, an engineering fellow with Google Cloud who has been with the company for 19 years. “She is the originator of a lot of these concepts overall … many of these ideas which I did not really understand the impact of until I saw it manifest itself,” said Seroter. “She had this vision that I did not even buy into three years ago. She saw a little further ahead from there, and she has built and published things. It is impressive to have such interesting engineering thought leadership, not just applied to how Google does platforms, but now turning that into how we can change … infrastructure to make it simpler. She is a pioneer of that.” In an interview with The New Stack, Berenberg said that her ideas on the IDP came to her when she looked at how this could all work using Google’s vast compute and services resources to reimagine how platform engineering could be improved. “The way it works is you have a cloud platform, and then on top of it is this thick layer of platform engineering stuff, right?” said Berenberg. “So, platform engineering teams are building a layer on top of infrastructure cloud to do an abstraction and workflows and whatever they need” to improve processes for developers. “It shrinks down because everything shifts down to the platform and now we are providing platform engineering.”


FakeCaptcha Infrastructure HelloTDS Infects Millions of Devices With Malware

The campaign’s cunning blend of social engineering and technical subterfuge has enabled threat actors to compromise systems across a vast array of regions, targeting unsuspecting users as they consume streaming media, download shared files, or even browse legitimate-appearing websites. Gen Digital researchers first identified HelloTDS as an intricate Traffic Direction System (TDS) — a malicious decision engine that leverages device and network fingerprinting to select which visitors receive harmful payloads, ranging from infostealers like LummaC2 to fraudulent browser updates and tech support scams. Entry points for the menace include compromised or attacker-operated file-sharing portals, streaming sites, pornographic platforms, and even malvertising embedded in seemingly innocuous ad spots. The system’s filtering and redirection logic allows it to avoid obvious honeytraps such as virtual machines, VPNs, or known analyst environments, significantly complicating detection and takedown efforts. The scale of the campaign is staggering. Gen’s telemetry reported over 4.3 million attempted infections within just two months, with the highest impact in the United States, Brazil, India, Western Europe, and, proportionally, several Balkan and African countries.
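The fingerprint-based gating described here boils down to a decision function: suspicious environments get benign content, targeted profiles get a payload. The sketch below is a schematic for defenders trying to reason about such systems; the signal names and targeting criteria are invented for illustration, not taken from HelloTDS itself:

```python
def route_visitor(fp: dict) -> str:
    """Schematic of a TDS-style gating decision (illustrative only).

    fp holds fingerprint signals the decision engine would collect
    client-side, e.g. virtualization hints, network type, OS, geo.
    """
    # Evasion: analyst-looking environments are served harmless content,
    # which is what frustrates detection and takedown efforts.
    if fp.get("is_vm") or fp.get("is_vpn") or fp.get("known_analyst_asn"):
        return "benign_page"
    # Targeting: only profiles the campaign wants receive a payload.
    if fp.get("os") == "Windows" and fp.get("geo") in {"US", "BR", "IN"}:
        return "fake_captcha_payload"
    # Everyone else is monetized some other way.
    return "ad_redirect"
```

The defensive takeaway is that a single fetch from a sandbox proves little: the same URL can serve different content to a VM, a VPN user, and a real victim.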


Cutting-Edge ClickFix Tactics Snowball, Pushing Phishing Forward

ClickFix first came to light as an attack method last year when Proofpoint researchers observed compromised websites serving overlay error messages to visitors. The message claimed that a faulty browser update was causing problems, and asked the victim to open "Windows PowerShell (Admin)" (which will open a User Account Control (UAC) prompt) and then right-click to paste code that supposedly "fixed" the problem — hence the attack name. Instead of a fix, though, users were unwittingly installing malware — in that case, it was the Vidar stealer. ... "The goals of ClickFix campaigns vary depending on the attacker," says Nathaniel Jones, vice president of security and AI strategy at Darktrace. "The aim might be to infect as many systems as possible to build out a network of proxies to use later. Some attackers are trying to exfiltrate credentials or domain controller files and then sell to other threat actors for initial access. So there isn't one type of victim or one objective — the tactic is flexible and being used in different ways." ... The approach, and ClickFix in general, represents a significant innovation in the world of phishing, according to Jones, because unlike an email asking someone to click on a typosquatted link that can be easily checked, the entire attack takes place inside the browser.


Like humans, AI is forcing institutions to rethink their purpose

The institutions in place now were not designed for this moment. Most were forged in the Industrial Age and refined during the Digital Revolution. Their operating models reflect the logic of earlier cognitive regimes: stable processes, centralized expertise and the tacit assumption that human intelligence would remain preeminent. ... But the assumptions beneath these structures are under strain. AI systems now perform tasks once reserved for knowledge workers, including summarizing documents, analyzing data, writing legal briefs, performing research, creating lesson plans and teaching, coding applications and building and executing marketing campaigns. Beyond automation, a deeper disruption is underway: The people running these institutions are expected to defend their continued relevance in a world where knowledge itself is no longer as highly valued or even a uniquely human asset. ... This does not mean institutional collapse is inevitable. But it does suggest that the current paradigm of stable, slow-moving and authority-based structures may not endure. At a minimum, institutions are under intense pressure to change. If institutions are to remain relevant and play a vital role in the age of AI, they must become more adaptive, transparent and attuned to the values that cannot readily be encoded in algorithms: human dignity, ethical deliberation and long-term stewardship.

Daily Tech Digest - May 29, 2025


Quote for the day:

"All progress takes place outside the comfort zone." -- Michael John Bobak


What Are Deepfakes? Everything to Know About These AI Image and Video Forgeries

Deepfakes rely on deep learning, a branch of AI that mimics how humans recognize patterns. These AI models analyze thousands of images and videos of a person, learning their facial expressions, movements and voice patterns. Then, using generative adversarial networks, AI creates a realistic simulation of that person in new content. GANs are made up of two neural networks where one creates content (the generator), and the other tries to spot if it's fake (the discriminator). The number of images or frames needed to create a convincing deepfake depends on the quality and length of the final output. For a single deepfake image, as few as five to 10 clear photos of the person's face may be enough. ... While tech-savvy people might be more vigilant about spotting deepfakes, regular folks need to be more cautious. I asked John Sohrawardi, a computing and information sciences Ph.D. student leading the DeFake Project, about common ways to recognize a deepfake. He advised people to look at the mouth to see if the teeth are garbled. "Is the video more blurry around the mouth? Does it feel like they're talking about something very exciting but act monotonous? That's one of the giveaways of more lazy deepfakes." ... "Too often, the focus is on how to protect yourself, but we need to shift the conversation to the responsibility of those who create and distribute harmful content," Dorota Mani tells CNET.
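The generator/discriminator tug-of-war can be shown with a deliberately tiny numerical sketch: a "generator" learns a single offset so its samples mimic data drawn from N(4, 1), while a logistic-regression "discriminator" tries to tell real from fake. This is a toy illustration of the adversarial training idea only, nothing like how production deepfake models are built:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    """The data distribution the generator should imitate: N(4, 1)."""
    return rng.normal(4.0, 1.0, n)

theta = 0.0      # generator: g(z) = z + theta, z ~ N(0, 1)
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g, batch = 0.05, 0.1, 64

for step in range(3000):
    x_real = real_batch(batch)
    x_fake = rng.normal(0.0, 1.0, batch) + theta

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr_d * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake) (non-saturating loss): shift theta
    # in whichever direction currently fools the discriminator more.
    d_fake = sigmoid(w * x_fake + b)
    theta += lr_g * np.mean((1 - d_fake) * w)

print(f"learned offset theta = {theta:.2f} (target 4.0)")
```

At equilibrium the discriminator can no longer separate the two distributions, which is exactly why finished deepfakes are hard to spot by construction.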


Beyond GenAI: Why Agentic AI Was the Real Conversation at RSA 2025

Unlike GenAI, which primarily focuses on the divergence of information, generating new content based on specific instructions, SynthAI developments emphasize the convergence of information, presenting less but more pertinent content by synthesizing available data. SynthAI will enhance the quality and speed of decision-making, potentially making decisions autonomously. The most evident application lies in summarizing large volumes of information that humans would be unable to thoroughly examine and comprehend independently. SynthAI’s true value will be in aiding humans to make more informed decisions efficiently. ... Trust in AI also needs to evolve. This isn’t a surprise as AI, like all technologies, is going through the hype cycle and in the same way that cloud and automation suffered with issues around trust in the early stages of maturity, so AI is following a very similar pattern. It will be some time before trust and confidence are in balance with AI. ... Agentic AI encompasses tools that can understand objectives, make decisions, and act. These tools streamline processes, automate tasks, and provide intelligent insights to aid in quick decision making. In a use case involving repetitive processes, take a call center as an example, agentic AI can have significant value.


The Privacy Challenges of Emerging Personalized AI Services

The nature of the search business will change substantially in this world of personalized AI services. It will evolve from a service for end users to an input into an AI service for end users. In particular, search will become a component of chatbots and AI agents, rather than the stand-alone service it is today. This merger has already happened to some degree. OpenAI has offered a search service as part of its ChatGPT deployment since last October. Google launched AI Overview in May of last year. AI Overview returns a summary of its search results generated by Google’s Gemini AI model at the top of its search results. When a user asks a question to ChatGPT, the chatbot will sometimes search the internet and provide a summary of its search results in its answer. ... The best way forward would not be to invent a sector-specific privacy regime for AI services, although this could be made to work in the same way that the US has chosen to put financial, educational, and health information under the control of dedicated industry privacy regulators. It might be a good approach if policymakers were also willing to establish a digital regulator for advanced AI chatbots and AI agents, which will be at the heart of an emerging AI services industry. But that prospect seems remote in today’s political climate, which seems to prioritize untrammeled innovation over protective regulation.


What CISOs can learn from the frontlines of fintech cybersecurity

For Shetty, the idea that innovation competes with security is a false choice. “They go hand in hand,” she says. User trust is central to her approach. “That’s the most valuable currency,” she explains. Lose it, and it’s hard to get back. That’s why transparency, privacy, and security are built into every step of her team’s work, not added at the end. ... Supply chain attacks remain one of her biggest concerns. Many organizations still assume they’re too small to be a target. That’s a dangerous mindset. Shetty points to many recent examples where attackers reached big companies by going through smaller suppliers. “It’s not enough to monitor your vendors. You also have to hold them accountable,” she says. Her team helps clients assess vendor cyber hygiene and risk scores, and encourages them to consider that when choosing suppliers. “It’s about making smart choices early, not reacting after the fact.” Vendor security needs to be an active process. Static questionnaires and one-off audits are not enough. “You need continuous monitoring. Your supply chain isn’t standing still, and neither are attackers.” ... The speed of change is what worries her most. Threats evolve quickly. The amount of data to protect grows every day. At the same time, regulators and customers expect high standards, and they should.


Tech optimism collides with public skepticism over FRT, AI in policing

Despite the growing alarm, some tech executives like OpenAI’s Sam Altman have recently reversed course, downplaying the need for regulation after previously warning of AI’s risks. This inconsistency, coupled with massive federal contracts and opaque deployment practices, erodes public trust in both corporate actors and government regulators. What’s striking is how bipartisan the concern has become. According to the Pew survey, only 17 percent of Americans believe AI will have a positive impact on the U.S. over the next two decades, while 51 percent express more concern than excitement about its expanding role. These numbers represent a significant shift from earlier years and a rare area of consensus between liberal and conservative constituencies. ... Bias in law enforcement AI systems is not simply a product of technical error; it reflects systemic underrepresentation and skewed priorities in AI design. According to the Pew survey, only 44 percent of AI experts believe women’s perspectives are adequately accounted for in AI development. The numbers drop even further for racial and ethnic minorities. Just 27 percent and 25 percent say the perspectives of Black and Hispanic communities, respectively, are well represented in AI systems.


6 rising malware trends every security pro should know

Infostealers steal browser cookies, VPN credentials, MFA (multi-factor authentication) tokens, crypto wallet data, and more. Cybercriminals sell the data that infostealers grab through dark web markets, giving attackers easy access to corporate systems. “This shift commoditizes initial access, enabling nation-state goals through simple transactions rather than complex attacks,” says Ben McCarthy, lead cyber security engineer at Immersive. ... Threat actors are systematically compromising the software supply chain by embedding malicious code within legitimate development tools, libraries, and frameworks that organizations use to build applications. “These supply chain attacks exploit the trust between developers and package repositories,” Immersive’s McCarthy tells CSO. “Malicious packages often mimic legitimate ones while running harmful code, evading standard code reviews.” ... “There’s been a notable uptick in the use of cloud-based services and remote management platforms as part of ransomware toolchains,” says Jamie Moles, senior technical marketing manager at network detection and response provider ExtraHop. “This aligns with a broader trend: Rather than relying solely on traditional malware payloads, adversaries are increasingly shifting toward abusing trusted platforms and ‘living-off-the-land’ techniques.”


How Constructive Criticism Can Improve IT Team Performance

Constructive criticism can be an excellent instrument for growth, both individually and at the team level, says Edward Tian, CEO of AI detection service provider GPTZero. "Many times, and with IT teams in particular, work is very independent," he observes in an email interview. "IT workers may not frequently collaborate with one another or get input on what they’re doing," Tian states. ... When offering constructive criticism, focus on how the poor result can be improved, Chowning advises, and use empathy when soliciting ideas for improvement. She adds that it’s important to ask questions, listen, seek to understand, and acknowledge any difficulties or constraints. ... With any IT team there are two key aspects of constructive criticism: creating the expectation and opportunity for performance improvement, and -- often overlooked -- instilling recognition in the team that performance is monitored and has implications, Chowning says. ... The biggest mistake IT leaders make is treating feedback as a one-way directive rather than a dynamic conversation, Avelange observes. "Too many IT leaders still operate in a command-and-control mindset, dictating what needs to change rather than co-creating solutions with their teams."


How AI will transform your Windows web browser

Google isn’t the only one sticking AI everywhere imaginable, of course. Microsoft Edge already has plenty of AI integration — including a Copilot icon on the toolbar. Click that, and you’ll get a Copilot sidebar where you can talk about the current web page. But the integration runs deeper than most people think, with more on the way: Copilot in Edge now has access to Copilot Vision, which means you can share your current web view with the AI model and chat about what you see with your voice. This is already here — today. Following Microsoft’s Build 2025 developers’ conference, the company is starting to test a Copilot box right on Edge’s New Tab page. Rather than a traditional Bing search box in that area, you’ll soon see a Copilot prompt box so you can ask a question or perform a search with Copilot — not Bing. It looks like Microsoft is calling this “Copilot Mode” for Edge. And it’s not just a transformed New Tab page complete with suggested prompts and a Copilot box, either: Microsoft is also experimenting with “Context Clues,” which will let Copilot take into account your browser history and preferences when answering questions. It’s worth noting that Copilot Mode is an optional and experimental feature. ... Even the less AI-obsessed browsers of Mozilla Firefox and Brave are now quietly embracing AI in an interesting way.


No, MCP Hasn’t Killed RAG — in Fact, They’re Complementary

Just as agentic systems are all the rage this year, so is MCP. But MCP is sometimes talked about as if it’s a replacement for RAG. So let’s review the definitions. In his “Is RAG dead yet?” post, Kiela defined RAG as follows: “In simple terms, RAG extends a language model’s knowledge base by retrieving relevant information from data sources that a language model was not trained on and injecting it into the model’s context.” As for MCP (and the middle letter stands for “context”), according to Anthropic’s documentation, it “provides a standardized way to connect AI models to different data sources and tools.” That’s the same definition, isn’t it? Not according to Kiela. In his post, he argued that MCP complements RAG and other AI tools: “MCP simplifies agent integrations with RAG systems (and other tools).” In our conversation, Kiela added further (ahem) context. He explained that MCP is a communication protocol — akin to REST or SOAP for APIs — based on JSON-RPC. It enables different components, like a retriever and a generator, to speak the same language. MCP doesn’t perform retrieval itself, he noted, it’s just the channel through which components interact. “So I would say that if you have a vector database and then you make that available through MCP, and then you let the language model use it through MCP, that is RAG,” he continued.
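Kiela’s distinction can be made concrete in a few lines of Python. In the sketch below, the retriever sits behind a simulated JSON-RPC 2.0 endpoint in the MCP style, and the “RAG” step is simply injecting whatever comes back into the prompt handed to the language model. The keyword retriever, the tool name `retrieve`, and the single-process “server” are illustrative assumptions, not real MCP plumbing — the point is only that the protocol is the channel, while retrieval-plus-injection is RAG.

```python
import json

# Toy corpus standing in for a vector database.
DOCS = {
    "doc1": "RAG injects retrieved text into the model's context window.",
    "doc2": "MCP is a JSON-RPC based protocol for connecting models to tools.",
}

def retrieve(query: str, top_k: int = 1) -> list:
    """Naive keyword retriever: rank docs by word overlap with the query."""
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(DOCS.values(), key=score, reverse=True)[:top_k]

def mcp_call(request_json: str) -> str:
    """Simulated MCP server endpoint: a JSON-RPC 2.0 request comes in,
    the named tool runs, and a JSON-RPC 2.0 response goes back out."""
    req = json.loads(request_json)
    assert req["jsonrpc"] == "2.0" and req["method"] == "tools/call"
    if req["params"]["name"] != "retrieve":
        raise ValueError("unknown tool")
    result = retrieve(**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

def rag_prompt(question: str) -> str:
    """The RAG part: fetch context over the (simulated) MCP channel,
    then inject it into the prompt sent to the language model."""
    request = json.dumps({
        "jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {"name": "retrieve", "arguments": {"query": question}},
    })
    context = json.loads(mcp_call(request))["result"]
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + question

prompt = rag_prompt("What protocol does MCP use?")
```

Swap the in-process `mcp_call` for a real MCP client and the toy retriever for a vector database, and the structure is the same: MCP carries the request, RAG is what you do with the response.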


AI didn’t kill Stack Overflow

Stack Overflow’s most revolutionary aspect was its reputation system. That is what elevated it above the crowd. The brilliance of the rep game allowed Stack Overflow to absorb all the other user-driven sites for developers and more or less kill them off. On Stack Overflow, users earned reputation points and badges for asking good questions and providing helpful answers. In the beginning, what was considered a good question or answer was not predetermined; it was a natural byproduct of actual programmers upvoting some exchanges and not others. ... For Stack Overflow, the new model, along with highly subjective ideas of “quality,” opened the gates to a kind of Stanford Prison Experiment. Rather than encouraging a wide range of interactions and behaviors, moderators earned reputation by culling interactions they deemed irrelevant. Suddenly, Stack Overflow wasn’t a place to go and feel like you were part of a long-lived developer culture. Instead, it became an arena where you had to prove yourself over and over again. ... Whether the culture of helping each other will survive in this new age of LLMs is a real question. Is human help still necessary? Or can it all be reduced to inputs and outputs? Maybe there’s a new role for humans in generating accurate data that feeds the LLMs. Maybe we’ll evolve into gardeners of these vast new tracts of synthetic data.