Showing posts with label Phishing. Show all posts
Showing posts with label Phishing. Show all posts

Daily Tech Digest - September 11, 2025


Quote for the day:

"You live longer once you realize that any time spent being unhappy is wasted." -- Ruth E. Renkl



Six hard truths for software development bosses

Everyone behaves differently when the boss is around. Everyone. And you, as a boss, need to realize this. There are two things to realize here. Firstly, when you are present, people will change who they are and what they say. Secondly, you should consider that fact when deciding whether to be in the room. ... Bosses need to realize that what they say, even comments that you might think are flippant and not meant to be taken seriously, will be taken seriously. ... The other side of that coin is that your silence and non-action can have profound effects. Maybe you space out in a meeting and miss a question. The team might think you blew them off and left the great idea hanging. Maybe you forgot to answer an email. Maybe you had bigger fish to fry and you were a bit short and dismissive of an approach by a direct report. Small lapses can be easily misconstrued by your team. ... You are the boss. You have the power to promote, demote, and award raises and bonuses. These powers are important, and people will see you in that light. Even your best attempts at being cordial, friendly, and collegial will not overcome the slight apprehension your authority will engender. Your mood on any given day will be noticed and tracked. ... You can and should have input into technical decisions and design decisions, but your team will want to be the ones driving what direction things take and how things get done. 


AI prompt injection gets real — with macros the latest hidden threat

“Broadly speaking, this threat vector — ‘malicious prompts embedded in macros’ — is yet another prompt injection method,” Roberto Enea, lead data scientist at cybersecurity services firm Fortra, told CSO. “In this specific case, the injection is done inside document macros or VBA [Visual Basic for Applications] scripts and is aimed at AI systems that analyze files.” Enea added: “Typically, the end goal is to mislead the AI system into classifying malware as safe.” ... “Attackers could embed hidden instructions in common business files like emails or Word documents, and when Copilot processed the file, it executed those instructions automatically,” Quentin Rhoads-Herrera, VP of cybersecurity services at Stratascale, explained. In response to the vulnerability, Microsoft recommended patching, restricting Copilot access, stripping hidden metadata from shared files, and enabling its built-in AI security controls. ... “We’ve already seen proof-of-concept attacks where malicious prompts are hidden inside documents, macros, or configuration files to trick AI systems into exfiltrating data or executing unintended actions,” Stratascale’s Rhoads-Herrera commented. “Researchers have also demonstrated how LLMs can be misled through hidden instructions in code comments or metadata, showing the same principle at work.” Rhoads-Herrera added: “While some of these remain research-driven, the techniques are quickly moving into the hands of attackers who are skilled at weaponizing proof-of-concepts.”


Are you really ready for AI? Exposing shadow tools in your organisation

When an organisation doesn’t regulate an approved framework of AI tools in place, its employees will commonly turn to using these applications across everyday actions. By now, everyone is aware of the existence of generative AI assets, whether they are actively using them or not, but without a proper ruleset in place, everyday employee actions can quickly become security nightmares. This can be everything from employees pasting sensitive client information or proprietary code into public generative AI tools to developers downloading promising open-source models from unverified repositories. ... The root cause of turning to shadow AI isn’t malicious intent. Unlike cyber actors, aiming to disrupt and exploit business infrastructure weaknesses for a hefty payout, employees aren’t leaking data outside of your organisation intentionally. AI is simply an accessible, powerful tool that many find exciting. In the absence of clear policies, training and oversight, and the increased pressure of faster, greater delivery, people will naturally seek the most effective support to get the job done. ... Regardless, you cannot protect against what you can’t see. Tools like Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB), which detect unauthorised AI use, must be an essential part of your security monitoring toolkit. Ensuring these alerts connect directly to your SIEM and defining clear processes for escalation and correction are also key for maximum security.


How to error-proof your team’s emergency communications

Hierarchy paralysis occurs when critical information is withheld by junior staff due to the belief that speaking up may undermine the chain of command. Junior operators may notice an anomaly or suspect a procedure is incorrect, but often neglect to disclose their concerns until after a mistake has happened. They may assume their input will be dismissed or even met with backlash due to their position. In many cases, their default stance is to believe that senior staff are acting on insight that they themselves lack. CRM trains employees to follow a structured verbal escalation path during critical incidents. Similar to emergency operations procedures (EOPs), staff are taught to express their concerns using short, direct phrases. This approach helps newer employees focus on the issue itself rather than navigating the interaction’s social aspects — an area that can lead to cognitive overload or delayed action. In such scenarios, CRM recommends the “2-challenge rule”: team members should attempt to communicate an observed issue twice, and if the issue remains unaddressed, escalate it to upper management. ... Strengthening emergency protocols can help eliminate miscommunication between employees and departments. Owners and operators can adopt strategies from other mission-critical industries to reduce human error and improve team responsiveness. While interpersonal issues between departments and individuals in different roles are inevitable, tighter emergency procedures can ensure consistency and more predictable team behavior.


SpamGPT – AI-powered Attack Tool Used By Hackers For Massive Phishing Attack

SpamGPT’s dark-themed user interface provides a comprehensive dashboard for managing criminal campaigns. It includes modules for setting up SMTP/IMAP servers, testing email deliverability, and analyzing campaign results features typically found in Fortune 500 marketing tools but repurposed for cybercrime. The platform gives attackers real-time, agentless monitoring dashboards that provide immediate feedback on email delivery and engagement. ... Attackers no longer need strong writing skills; they can simply prompt the AI to create scam templates for them. The toolkit’s emphasis on scale is equally concerning, as it promises guaranteed inbox delivery to popular providers like Gmail, Outlook, and Microsoft 365 by abusing trusted cloud services such as Amazon AWS and SendGrid to mask its malicious traffic. ... What once required significant technical expertise can now be executed by a single operator with a ready-made toolkit. The rise of such AI-driven platforms signals a new evolution in cybercrime, where automation and intelligent content generation make attacks more scalable, convincing, and difficult to detect. To counter this emerging threat, organizations must harden their email defenses. Enforcing strong email authentication protocols such as DMARC, SPF, and DKIM is a critical first step to make domain spoofing more difficult. Furthermore, enterprises should deploy AI-powered email security solutions capable of detecting the subtle linguistic patterns and technical signatures of AI-generated phishing content.


How attackers weaponize communications networks

The most attractive targets for advanced threat actors are not endpoint devices or individual servers, but the foundational communications networks that connect everything. This includes telecommunications providers, ISPs, and the routing infrastructure that forms the internet’s backbone. These networks are a “target-rich environment” because compromising a single point of entry can grant access to a vast amount of data from a multitude of downstream targets. The primary motivation is overwhelmingly geopolitical. We’re seeing a trend of nation-state actors, such as those behind the Salt Typhoon campaign, moving beyond corporate espionage to a more strategic, long-term intelligence-gathering mission. ... Two recent trends are particularly telling and serve as major warning signs. The first is the sheer scale and persistence of these attacks. ... The second trend is the fusion of technical exploits with AI-powered social engineering. ... A key challenge is the lack of a standardized global approach. Differing regulations around data retention, privacy, and incident reporting can create a patchwork of security requirements that threat actors can easily exploit. For a global espionage campaign, a weak link in one country’s regulatory framework can compromise an entire international communications chain. The goal of international policy should be to establish a baseline of security that includes mandatory incident reporting, a unified approach to patching known vulnerabilities, and a focus on building a collective defense.


AI's free web scraping days may be over, thanks to this new licensing protocol

AI companies are capturing as much content as possible from websites while also extracting information. Now, several heavyweight publishers and tech companies -- Reddit, Yahoo, People, O'Reilly Media, Medium, and Ziff Davis (ZDNET's parent company) -- have developed a response: the Really Simple Licensing (RSL) standard. You can think of RSL as Really Simple Syndication's (RSS) younger, tougher brother. While RSS is about syndication, getting your words, stories, and videos out onto the wider web, RSL says: "If you're an AI crawler gobbling up my content, you don't just get to eat for free anymore." The idea behind RSL is brutally simple. Instead of the old robots.txt file -- which only said, "yes, you can crawl me," or "no, you can't," and which AI companies often ignore -- publishers can now add something new: machine-readable licensing terms. Want an attribution? You can demand it. Want payment every time an AI crawler ingests your work, or even every time it spits out an answer powered by your article? Yep, there's a tag for that too. ... It's a clever fix for a complex problem. As Tim O'Reilly, the O'Reilly Media CEO and one of the RSL initiative's high-profile backers, said: "RSS was critical to the internet's evolution…but today, as AI systems absorb and repurpose that same content without permission or compensation, the rules need to evolve. RSL is that evolution."


AI is changing the game for global trade: Nagendra Bandaru, Wipro

AI is revolutionising global supply chain and trade management by enabling businesses across industries to make real-time, intelligent decisions. This transformative shift is driven by the deployment of AI agents, which dynamically respond to changing tariff regimes, logistics constraints, and demand fluctuations. Moving beyond traditional static models, AI agents are helping create more adaptive and responsive supply chains. ... The strategic focus is also evolving. While cost optimisation remains important, AI is now being leveraged to de-risk operations, anticipate geopolitical disruptions, and ensure continuity. In essence, agentic AI is reshaping supply chains into predictive, adaptive ecosystems that align more closely with the complexities of global trade. ... The next frontier is going to be threefold: first, the rise of agentic AI at scale marks a shift from isolated use cases to enterprise-wide deployment of autonomous agents capable of managing end-to-end trade ecosystems; second, the development of sovereign and domain-specific language models is enabling lightweight, highly contextualised solutions that uphold data sovereignty while delivering robust, enterprise-grade outcomes; and third, the convergence of AI with emerging technologies—including blockchain for provenance and quantum computing for optimisation—is poised to redefine global trade dynamics.


5 challenges every multicloud strategy must address

Transferring AI data among various cloud services and providers also adds complexity — but also significant risks. “Tackling software sprawl, especially as organizations accelerate their adoption of AI, is a top action for CIOs and CTOs,” says Mindy Lieberman, CIO at database platform provider MongoDB. ... A multicloud environment can complicate the management of data sovereignty. Companies need to ensure that data remains in line with the laws and regulations of the specific geographic regions where it is stored and processed. ... Deploying even one cloud service can present cybersecurity risks for an enterprise, so having a strong security program in place is all the more vital for a multicloud environment. The risks stem from expanded attack surfaces, inconsistent security practices among service providers, increased complexity of the IT infrastructure, fragmented visibility, and other factors. IT needs to be able to manage user access to cloud services and detect threats across multiple environments — in many cases without even having a full inventory of cloud services. ... “With greater complexity comes more potential avenues of failure, but also more opportunities for customization and optimization,” Wall says. “Each cloud provider offers unique strengths and weaknesses, which means forward-thinking enterprises must know how to leverage the right services at the right time.”


What Makes Small Businesses’ Data Valuable to Cybercriminals?

Small businesses face unique challenges that make them particularly vulnerable. They often lack dedicated IT or cybersecurity teams, sophisticated systems, and enterprise-grade protections. Budget constraints mean many cannot afford enterprise-level cybersecurity solutions, creating easily exploitable gaps. Common issues include outdated software, reduced security measures, and unpatched systems, which weaken defenses and provide easy entry points for criminals. A significant vulnerability is the lack of employee cybersecurity awareness. ... Small businesses, just like large organizations, collect and store vast amounts of valuable data. Customer data represents a goldmine for cybercriminals, including first and last names, home and email addresses, phone numbers, financial information, and even medical information. Financial records are equally attractive targets, including business financial information, payment details, and credit/debit card payment data. Intellectual property and trade secrets represent valuable proprietary assets that can be sold to competitors or used for corporate espionage. ... Small businesses are undeniably attractive targets for cybercriminals, not because they are financial giants, but because they are perceived as easier to breach due to resource constraints and common vulnerabilities. Their data, from customer PII to financial records and intellectual property, is highly valuable for resale, fraud, and as gateways to larger targets.

Daily Tech Digest - August 17, 2025


Quote for the day:

"Failure is the condiment that gives success its flavor." -- Truman Capote


The third leg of the stool: Technology’s role in M&A

The term “technical debt” wasn’t mainstream, making it tough to convey to lawyers, accountants and executives. Their languages aligned — business, finance, law — with shared specificity. But IT? We spoke a different dialect, full of jargon that obscured our business insights. This cultural divide explained technology’s historical exclusion from M&A. The gap was mine to bridge. Over time, I learned to translate, framing technical risks in terms of dollars, downtime and competitive edge. ... Overlap exists with legal and finance, but IT’s lens is unique: assessing how operations impact data and systems. Chaotic processes yield chaotic data; effective ones produce reliable insights. ... “Good decisions on bad data are bad decisions” (me, circa 2007). Data is an enterprise’s most valuable asset, yet often neglected. Poor data can cripple; great data accelerates growth. In M&A, I scrutinize quality, lifecycle management, governance, ownership and analysis. Companies are typically polarized: exemplary governance or barely functional. Data issues heavily influence deal pricing — more on that in a future post. ... Critical during M&A, as deals attract hackers — sometimes derailing them entirely. With AI-driven threats rising, robust postures are non-negotiable. This warrants its own article.


Navigating the issues that impact data center design today

In the last few years, design considerations have changed significantly. The adoption of high-performance computing (HPC) and artificial intelligence (AI) applications translates into greater power consumption and that requires a rethink of cooling and management. What’s more, it’s increasingly difficult to predict future capacity requirements. ... Modular data center infrastructure can help facilitate zone-based deployments. Many people think of modular data centers as those deployed in ISO shipping containers, but that is only one type. There are also skid-mounted systems and preconfigured enclosures. Preconfigured enclosures can be shells or self-contained units with built-in power, cooling, fire suppression, and physical security. ... Whether building out a new data center or expanding an existing one, organizations should choose sustainable materials. With smart choices, future data centers will be self-sufficient and carbon- and water-neutral and have minimal impact on the local environment.
Planning is key These challenges have upped the ante for data center design planning. It’s no longer advisable to build out a simple shell with a raised floor and start adding infrastructure. Your facility must have the necessary power capacity, redundancy, and security to meet your business needs. 


Mastering Microservices: Seven Uncommon Strategies for Streamlined Success

Containerization might seem like old news, but there are nuances that can significantly impact performance and scalability. Containers encapsulate your microservices, ensuring consistency across environments. Yet, not all container strategies are created equal. We’ve seen teams struggle when they cram too many processes into a single container. ... It’s said that you can’t manage what you can’t measure, and this couldn’t be truer for microservices. With multiple services running concurrently, effective logging and monitoring become crucial. Gone are the days of relying solely on traditional log files or single-instance monitors. We once faced a situation where a subtle bug in a service went undetected for weeks, causing memory leaks and gradually degrading performance. Our solution was to implement centralized logging and observability tools like Prometheus and Grafana. These tools allowed us to aggregate logs from various services and gain insights through real-time dashboards. ... Security is often like flossing—everyone knows it’s important, but many neglect it until there’s a problem. With microservices, security risks multiply. It’s crucial to secure inter-service communication, protect sensitive data, and ensure compliance with industry standards.


AI Security in the Cloud-Native DevSecOps Pipeline

Because reacting to threats is a lost cause when the attacks themselves are learning and adapting, a proactive stance is essential for survival. This is a mindset embraced by security leaders like Akash Agrawal, VP of DevOps & DevSecOps at LambdaTest, an AI-native software testing platform. He argues for a fundamental shift: “Security can no longer be bolted on at the end,” he explains. “AI allows us to move from reactive scanning to proactive prevention.” This approach means using AI not just to identify flaws in committed code, but to predict where the next one might emerge. ... But architectural flaws are not the only risk. AIʼs drive for automation can also lead to more common security gaps like credential leakage, a problem that Nic Adams, co-founder and CEO of security startup 0rcus, sees growing. He points to AI-backed CI/CD tools that auto-generate infrastructure-as-code and inadvertently create “credential sprawl” by embedding long-lived API keys directly into configuration files. The actionable defense here is to assume AI will make mistakes and build a safety net around it. Teams must integrate real-time secret scanning directly into the pipeline and enforce a strict policy of using ephemeral, short-lived credentials that expire automatically. Beyond specific code vulnerabilities, there is a more strategic gap that AI introduces into the development process itself. 


Stop using AI for these 9 work tasks - here's why

Every time you give the AI some information, ask yourself how you would feel if it were posted to the company's public blog or wound up on the front page of your industry's trade journal. This concern also includes information that might be subject to disclosure regulations, such as HIPAA for health information or GDPR for personal data for folks operating in the EU. Regardless of what the AI companies tell you, it's best to simply assume that everything you feed into an AI is now grist for the model-training mill. Anything you feed in could later wind up in a response to somebody's prompt, somewhere else. ... Contracts are designed to be detailed and specific agreements on how two parties will interact. They are considered governing documents, which means that writing a bad contract is like writing bad code. Baaad things will happen. Do not ask AIs for help with contracts. They will make errors and omissions. They will make stuff up. Worse, they will do so while sounding authoritative, so you're more likely to use their advice. ... But when it comes time to ask for real advice that you plan on considering as you make major decisions, just don't. Let's step away from the liability risk issues and focus on common sense. First, if you're using something like ChatGPT for real advice, you have to know what to ask. If you're not trained in these professions, you might not know.


The Evolution of the DBA—More Than Just a Keeper of Databases

Automation has dramatically changed database administration. Routine tasks—such as performance tuning, index management, and backup scheduling—are increasingly handled by AI-driven database tools. Solutions such as Oracle Autonomous Database, Db2 AI for SQL, and Microsoft Azure SQL’s Intelligent Query Processing promise self-optimizing, self-healing databases. While this might sound like a threat to DBAs, it’s actually an opportunity. Instead of focusing on routine maintenance, DBAs can now shift their efforts toward higher-value tasks including data architecture, governance, and security. ... Organizations are no longer tied to a single database platform. With multi-cloud and hybrid cloud strategies becoming the norm, DBAs must manage data across on-premises systems, cloud-native databases, and hybrid architectures. The days of being a single-platform DBA (e.g., only working with one DBMS) are coming to an end. Instead, cross-platform expertise is now a necessity. Knowing how to optimize for multiple platforms and database systems—for example, AWS RDS, Google Cloud Spanner, Azure SQL, and on-prem Db2, Oracle, and PostgreSQL—is more and more a core part of the DBA’s job description.  ... With the explosion of data regulations and industry-specific mandates, compliance has become a primary concern for DBAs. 


The global challenge of achieving cyber resilience

The barriers to effective cybersecurity include familiar suspects such as budgetary and resource limitations, the increasing complexity of modern systems and challenge of keeping up with rapidly evolving cyber threats. However, topping the list of challenges for many organisations is the ongoing shortage of cybersecurity skills. A recent Cybersecurity Workforce Study from ISC2 found that, although the size of the global cybersecurity workforce increased to 5.5 million workers in 2023 (a rise of 9% over a single year), so did the gap between supply and demand, which rose by 13% over the same period. Unfortunately, it’s more than just a numbers gap. The study also found that the skills gap is an even greater concern, with respondents saying the lack of necessary skills was a bigger factor making their organisations vulnerable. It’s clear the current approach is flawed. The grand plans that governments have for cybersecurity will require significant uplifts to security programs, including major improvements in developer upskilling, skills verification and guardrails for artificial intelligence tools. Organisations also need to modernise their approach by implementing pathways to upskilling that use deep data insights to provide the best possible skills verification. They need to manage and mitigate the inherent risks that developers with low security maturity bring to the table.


Social engineering becomes strategic threat as OT sector faces phishing, deepfakes, and AI deception risks

With the expanding IT/OT footprint, the attack surface is increasingly providing attackers additional opportunities to compromise targets by stealing credentials, impersonating trusted insiders, and moving laterally from one system to another inside the network. AI-driven phishing, voice cloning, and deepfake-enabled pretexting are lowering the barrier to entry, enabling cyber adversaries to deploy powerful tools that have the potential to erode the reliability of human judgment across critical infrastructure installations. Microsoft security researchers warn that a single compromise, say via a contractor’s infected laptop, can breach previously isolated OT systems, turning them into a breach gateway. While phishing and identity theft are now common access tools, the impact in OT environments is much worse. ... AI-driven deception is rapidly reshaping the social engineering landscape. Attackers are using voice cloning and deepfake technology to impersonate executives with unnerving accuracy. Qantas recently fell victim to a similar scheme, where an AI-powered ‘vishing’ attack compromised the personal data of up to six million customers. These incidents highlight how artificial intelligence has lowered the barrier for convincing, high-impact fraud. Across OT environments, such as energy distribution or manufacturing plants, the impact of social engineering goes way beyond stolen funds or data.


When cloud growth outpaces control, waste follows

Access to data does not guarantee accountability. Many organizations have detailed cost reporting but continue to struggle with cloud waste. The issue here shifts from one of visibility towards one of proximity. Our data shows 59% of organizations have a FinOps team that does some or all cloud cost optimization tasks, yet in many cases, these teams still sit at the edge of delivery. So, while they can surface issues, they are often too removed from daily operations to intervene effectively. The most effective models integrate cost ownership into delivery itself. This means that engineering leads, platform teams and product owners have oversight to take action before inefficiencies take hold. As a result, when these roles are supported with relevant reporting and shared financial metrics, cost awareness becomes a natural part of the decision-making process. This makes it easier to adjust workloads, retire underutilized services, and optimize environments in-flight, rather than in hindsight. ... Control is easiest to build before complexity sets in. The longer organizations delay embedding structure into cloud governance, the harder it becomes to retrofit later. Inconsistent tagging, ambiguous ownership and manual reporting all take time to correct once they are entrenched.


The Growing Impact of Technical Solution Architecture in Software Engineering

Technical solution architects serve as the bridge between business objectives and technology implementation. Their role involves understanding organizational needs, designing scalable system architectures, and leading development teams to execute complex solutions efficiently. As companies transition to cloud-native applications and AI-powered automation, technical solution architects must design systems that are adaptable, secure, and optimized for performance. ... “Legacy systems, while functional, often become bottlenecks as organizations grow,” Bodapati, who is also a fellow at the Hackathon Raptors, explains. “By modernizing these systems, we ensure better performance, stronger security, and more streamlined operations—all essential for today’s data-driven enterprises.” ... With experts like Rama Krishna Prasad Bodapati leading the charge in system architecture and software engineering, businesses can ensure scalability, agility, and efficiency in their IT infrastructure. His expertise in full-stack development, cloud engineering, and enterprise software modernization continues to shape the future of digital transformation. “The future of software engineering isn’t just about building applications—it’s about building intelligent, adaptable, and high-performance ecosystems that drive business success,” Bodapati emphasizes.

Daily Tech Digest - July 23, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson


AI in customer communication: the opportunities and risks SMBs can’t ignore

To build consumer trust, businesses must demonstrate that AI genuinely improves the customer experience, especially by enhancing the quality, relevance and reliability of communication. With concerns around data misuse and inaccuracy, businesses need to clearly explain how AI supports secure, accurate and personalized interactions, not just internally but in ways customers can understand and see. AI should be positioned as an enabler of human service, taking care of routine tasks so employees can focus on complex, sensitive or high-value customer needs. A key part of gaining long-term trust is transparency around data. Businesses must clearly communicate how customer information is handled securely and show that AI is being used responsibly and with care. This could include clearly labelling AI-generated communications such as emails or text messages, or proactively informing customers about what data is being used and for what purpose.  ... As conversations move beyond why AI should be used to how it must be used responsibly and effectively, companies have entered a make-or-break “audition phase” for AI. In customer communications, businesses can no longer afford to just talk about AI’s benefits, they must prove them by demonstrating how it enhances quality, security, and personalization.


The Expiring Trust Model: CISOs Must Rethink PKI in the Era of Short-Lived Certificates and Machine Identity

While the risk associated with certificates applies to all companies, it is a greater challenge for businesses operating in regulated sectors such as healthcare, where certificates must often be tied to national digital identity systems. In several countries, healthcare providers and services are now required to issue certificates bound to a National Health Identifier (NHI). These certificates are used for authentication, e-signature and encryption in health data exchanges and must adhere to complex issuance workflows, usage constraints and revocation processes mandated by government frameworks. Managing these certificates alongside public TLS certificates introduces operational complexity that few legacy PKI solutions were designed to handle in today’s dynamic and cloud-native environments. ... The urgency of this mandate is heightened by the impending cryptographic shift driven by the rise of quantum computing. Transitioning to post-quantum cryptography (PQC) will require organizations to implement new algorithms quickly and securely. Frequent certificate renewal cycles, which once seemed a burden, could now become a strategic advantage. When managed through automated and agile certificate lifecycle management, these renewals provide the flexibility to rapidly replace compromised keys, rotate certificate authorities or deploy quantum-safe algorithms as they become standardized.


The CISO code of conduct: Ditch the ego, lead for real

The problem doesn’t stop at vendor interactions. It shows up inside their teams, too. Many CISOs don’t build leadership pipelines; they build echo chambers. They hire people who won’t challenge them. They micromanage strategy. They hoard influence. And they act surprised when innovation dries up or when great people leave. As Jadee Hanson, CISO at Vanta, put it, “Ego builds walls. True leadership builds trust. The best CISOs know the difference.” That distinction matters, especially when your team’s success depends on your ability to listen, adapt, and share the stage. ... Security isn’t just a technical function anymore. It’s a leadership discipline. And that means we need more than frameworks and certifications; we need a shared understanding of how CISOs should show up. Internally, externally, in boardrooms, and in the broader community. That’s why I’m publishing this. Not because I have all the answers, but because the profession needs a new baseline. A new set of expectations. A standard we can hold ourselves, and each other, to. Not about compliance. About conduct. About how we lead. What follows is the CISO Code of Conduct. It’s not a checklist, but a mindset. If you recognize yourself in it, good. If you don’t, maybe it’s time to ask why. Either way, this is the bar. Let’s hold it. ... A lot of people in this space are trying to do the right thing. But there are also a lot of people hiding behind a title.


Phishing simulations: What works and what doesn’t

Researchers conducted a study on the real-world effectiveness of common phishing training methods. They found that the absolute difference in failure rates between trained and untrained users was small across various types of training content. However, we should take this with caution, as the study was conducted within a single healthcare organization and focused only on click rates as the measure of success or failure. It doesn’t capture the full picture. Matt Linton, Google’s security manager, said phishing tests are outdated and often cause more frustration among employees than actually improving their security habits. ... For any training program to work, you first need to understand your organization’s risk. Which employees are most at risk? What do they already know about phishing? Next, work closely with your IT or security teams to create phishing tests that match current threats. Tell employees what to expect. Explain why these tests matter and how they help stop problems. Don’t play the blame game. If someone fails a test, treat it as a chance to learn, not to punish. When you do this, employees are less likely to hide mistakes or avoid reporting phishing emails. When picking a vendor, focus on content and realistic simulations. The system should be easy to use and provide helpful reports.


Reclaiming Control: How Enterprises Can Fix Broken Security Operations

Asset management is critical to the success of the security operations function. In order to properly defend assets, I first and foremost need to know about them and be able to manage them. This includes applying policies, controls, and being able to identify assets and their locations when necessary, of course. With the move to hybrid and multi-cloud, asset management is much more difficult than it used to be. ... Visibility enables another key component of security operations – telemetry collection. Without the proper logging, eventing, and alerting, I can’t detect, investigate, analyze, respond to, and mitigate security incidents. Security operations simply cannot operate without telemetry, and the hybrid and multi-cloud world has made telemetry collection much more difficult than it used to be. ... If a security incident is serious enough, there will need to be a formal incident response. This will involve significant planning, coordination with a variety of stakeholders, regular communications, structured reporting, ongoing analysis, and a post-incident evaluation once the response is wrapped up. All of these steps are complicated by hybrid and multi-cloud environments, if not made impossible altogether. The security operations team will not be able to properly engage in incident response if they are lacking the above capabilities, and having a complex environment is not an excuse.


Legacy No More: How Generative AI is Powering the Next Wave of Application Modernization in India

Choosing the right approach to modernise your legacy systems is a task. Generative AI helps overcome the challenges faced in legacy systems and accelerates modernization. For example, it can be used to understand how legacy systems function through detailed business requirements. The resulting documents can be used to build new systems on the cloud in the second phase. This can make the process cheaper, too, and thus easier to get business cases approved. Additionally, generative AI can help create training documents for the current system if the organization wants to continue using its mainframes. In one example, generative AI might turn business models into microservices, API contracts, and database schemas ready for cloud-native inclusion. ... You need to have a holistic assessment of your existing system to implement generative AI effectively. Leaders must assess obsolete modules, interdependencies, data schemas, and throughput constraints to pinpoint high-impact targets and establish concrete modernization goals. Revamping legacy applications with generative AI starts with a clear understanding of the existing system. Organizations must conduct a thorough evaluation, mapping performance bottlenecks, obsolete modules, entanglements, and intricacies of the data flow, to create a modernization roadmap.


A Changing of the Guard in DevOps

Asimov, a newcomer in the space, is taking a novel approach — but addressing a challenge that’s as old as DevOps itself. According to the article, the team behind Asimov has zeroed in on a major time sink for developers: The cognitive load of understanding deployment environments and platform intricacies. ... What makes Asimov stand out is not just its AI capability but its user-centric focus. This isn’t another auto-coder. This is about easing the mental burden, helping engineers think less about YAML files and more about solving business problems. It’s a fresh coat of paint on a house we’ve been renovating for over a decade. ... Whether it’s a new player like Asimov or stalwarts like GitLab and Harness, the pattern is clear: AI is being applied to the same fundamental problems that have shaped DevOps from the beginning. The goals haven’t changed — faster cycles, fewer errors, happier teams — but the tools are evolving. Sure, there’s some real innovation here. Asimov’s knowledge-centric approach feels genuinely new. GitLab’s AI agents offer a logical evolution of their existing ecosystem. Harness’s plain-language chat interface lowers the barrier to entry. These aren’t just gimmicks. But the bigger story is the convergence. AI is no longer an outlier or an optional add-on — it’s becoming foundational. And as these solutions mature, we’re likely to see less hype and more impact.


Data Protection vs. Cyber Resilience: Mastering Both in a Complex IT Landscape

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-oriented approach on top of that. Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats as these approaches focus on infrastructure, neglecting application-level dependencies and validation processes. Further, threat actors have moved beyond interrupting services and now target data to poison, encrypt or exfiltrate it. ... Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline as the best defence is not solely a hardened perimeter, it’s also having a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience, they must master both.


Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities. The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude. For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better. ... The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations. For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. 


How to Advance from SOC Manager to CISO?

Strategic thinking demands a firm grip on the organization's core operations, particularly how it generates revenue and its key value streams. This perspective allows security professionals to align their efforts with business objectives, rather than operating in isolation. ... This is related to strategic thinking but emphasizes knowledge of risk management and finance. Security leaders must factor in financial impacts to justify security investments and manage risks effectively. Balancing security measures with user experience and system availability is another critical aspect. If security policies are too strict, productivity can suffer; if they're too permissive, the company can be exposed to threats. ... Effective communication is vital for translating technical details into language senior stakeholders can grasp and act upon. This means avoiding jargon and abbreviations to convey information in a simplistic manner that resonates with multiple stakeholders, including executives who may not have a deep technical background. Communicating the impact of security initiatives in clear, concise language ensures decisions are well-informed and support company goals. ... You will have to ensure technical services meet business requirements, particularly in managing service delivery, implementing change, and resolving issues. All of this is essential for a secure and efficient IT infrastructure.

Daily Tech Digest - July 18, 2025


Quote for the day:

"It is during our darkest moments that we must focus to see the light." -- Aristotle Onassis




Machine unlearning gets a practical privacy upgrade

Machine unlearning, which refers to strategies for removing the influence of specific training data from a model, has emerged to fill the gap. But until now, most approaches have either been slow and costly or fast but lacking formal guarantees. A new framework called Efficient Unlearning with Privacy Guarantees (EUPG) tries to solve both problems at once. Developed by researchers at the Universitat Rovira i Virgili in Catalonia, EUPG offers a practical way to forget data in machine learning models with provable privacy protections and a lower computational cost. Rather than wait for a deletion request and then scramble to rework a model, EUPG starts by preparing the model for unlearning from the beginning. The idea is to first train on a version of the dataset that has been transformed using a formal privacy model, either k-anonymity or differential privacy. This “privacy-protected” model doesn’t memorize individual records, but still captures useful patterns. ... The researchers acknowledge that extending EUPG to large language models and other foundation models will require further work, especially given the scale of the data and the complexity of the architectures involved. They suggest that for such systems, it may be more practical to apply privacy models directly to the model parameters during training, rather than to the data beforehand.


Emerging Cloaking-as-a-Service Offerings are Changing Phishing Landscape

Cloaking-as-a-service offerings – increasingly powered by AI – are “quietly reshaping how phishing and fraud infrastructure operates, even if it hasn’t yet hit mainstream headlines,” SlashNext’s Research Team wrote Thursday. “In recent years, threat actors have begun leveraging the same advanced traffic-filtering tools once used in shady online advertising, using artificial intelligence and clever scripting to hide their malicious payloads from security scanners and show them only to intended victims.” ... The newer cloaking services offer advanced detection evasion techniques, such as JavaScript fingerprinting, device and network profiling, machine learning analysis and dynamic content swapping, and put them into user-friendly platforms that hackers and anyone else can subscribe to, SlashNext researchers wrote. “Cybercriminals are effectively treating their web infrastructure with the same sophistication as their malware or phishing emails, investing in AI-driven traffic filtering to protect their scams,” they wrote. “It’s an arms race where cloaking services help attackers control who sees what online, masking malicious activity and tailoring content per visitor in real time. This increases the effectiveness of phishing sites, fraudulent downloads, affiliate fraud schemes and spam campaigns, which can stay live longer and snare more victims before being detected.”


You’re Not Imagining It: AI Is Already Taking Tech Jobs

It’s difficult to pinpoint the exact motivation behind job cuts at any given company. The overall economic environment could also be a factor, marked by uncertainties heightened by President Donald Trump’s erratic tariff plans. Many companies also became bloated during the pandemic, and recent layoffs could still be trying to correct for overhiring. According to one report released earlier this month by the executive coaching firm Challenger, Gray and Christmas, AI may be more of a scapegoat than a true culprit for layoffs: Of more than 286,000 planned layoffs this year, only 20,000 were related to automation, and of those, only 75 were explicitly attributed to artificial intelligence, the firm found. Plus, it’s challenging to measure productivity gains caused by AI, said Stanford’s Chen, because while not every employee may have AI tools officially at their disposal at work, they do have unauthorized consumer versions that they may be using for their jobs. While the technology is beginning to take a toll on developers in the tech industry, it’s actually “modestly” created more demand for engineers outside of tech, said Chen. That’s because other sectors, like manufacturing, finance, and healthcare, are adopting AI tools for the first time, so they are adding engineers to their ranks in larger numbers than before, according to her research.


The architecture of culture: People strategy in the hospitality industry

Rewards and recognitions are the visible tip of the iceberg, but culture sits below the surface. And if there’s one thing that I’ve learned over the years, it’s that culture only sticks when it’s felt, not just said. Not once a year, but every single day. Hilton’s consistent recognition as a Great Place to Work® globally and in India stems from our unwavering support and commitment to helping people thrive, both personally and professionally. ... What has sustained our culture through this growth is a focus on the everyday. It is not big initiatives alone that shape how people feel at work, but the smaller, consistent actions that build trust over time. Whether it is how a team huddle is run, how feedback is received, or how farewells are handled, we treat each moment as an opportunity to reinforce care and connection. ... Equally vital is cultivating culturally agile, people-first leaders. South Asia’s diversity, across language, faith, generation, and socio-economic background, demands leadership that is both empathetic and inclusive. We’re working to embed this cultural intelligence across the employee journey, from hiring and onboarding to ongoing development and performance conversations, so that every team member feels genuinely seen and supported.


Capturing carbon - Is DAC a perfect match for data centers?

The commercialization of DAC, however, faces several significant challenges. One primary obstacle is navigating different compliance requirements across jurisdictions. Certification standards vary significantly between regions like Canada, the UK, and Europe, necessitating differing approaches in each jurisdiction. However, while requiring adjustments, Chadwick argues that these differences are not insurmountable and are merely part of the scaling process. Beyond regulatory and deployment concerns, achieving cost reductions is a significant challenge. DAC remains highly expensive, costing an average of $680 per ton to produce in 2024, according to Supercritical, a carbon removal marketplace. In comparison, Biochar has an average price of $165 per ton, and enhanced rock weathering has an average price of $310 per ton. In addition, the complexity of DAC means up-front costs are much higher than those of alternative forms of carbon removal. An average DAC unit comprises air-intake manifolds, absorption and desorption towers, liquid-handling tanks, and bespoke site-specific engineering. DAC also requires significant amounts of power to operate. Recent studies have shown that the energy consumption of fans in DAC plants can range from 300 to 900 kWh per ton of CO2 captures, which represents between 20 - 40 percent of total DAC system energy usage.


Rethinking Risk: The Role of Selective Retrieval in Data Lake Strategies

Selective retrieval works because it bridges the gap between data engineering complexity and security usability. It gives teams options without asking them to reinvent the wheel. It also avoids the need to bring in external tools during a breach investigation, which can introduce latency, complexity, or worse, gaps in the chain of custody. What’s compelling about this approach is that it doesn’t require businesses to abandon existing tools or re-architect their infrastructure. ... This model is especially relevant for mid-size IT teams who want to cover their audit requirements, but don’t have a 24/7 security operations center. It’s also useful in regulated sectors such as healthcare, financial services, and manufacturing where data retention isn’t optional, but real-time analysis for everything isn’t practical. ... Data volumes are continuing to rise. As organizations face high costs and fatigue, those that thrive will be the ones that treat storage and retrieval as distinct functions. The ability to preserve signal without incurring ongoing noise costs will become a critical enabler for everything from insider threat detection to regulatory compliance. Selective retrieval isn’t just about saving money. It’s about regaining control over data sprawl, aligning IT resources with actual risk, and giving teams the tools they need to ask, and answer, better questions.


Manufactured Madness: How To Protect Yourself From Insane AIs

The core of the problem lies in a well-intentioned but flawed premise: that we can and should micromanage an AI’s output to prevent any undesirable outcomes. These “guardrails” are complex sets of rules and filters designed to stop the model from generating hateful, biased, dangerous, or factually incorrect information. In theory, this is a laudable goal. In practice, it has created a generation of AIs that prioritize avoiding offense over providing truth. ... Compounding the problem of forced outcomes is a crisis of quality. The data these models are trained on is becoming increasingly polluted. In the early days, models were trained on a vast, curated slice of the pre-AI internet. But now, as AI-generated content inundates every corner of the web, new models are being trained on the output of their predecessors. ... Given this landscape, the burden of intellectual safety now falls squarely on the user. We can no longer afford to treat AI-generated text with passive acceptance. We must become active, critical consumers of its output. Protecting yourself requires a new kind of digital literacy. First and foremost: Trust, but verify. Always. Never take a factual claim from an AI at face value. Whether it’s a historical date, a scientific fact, a legal citation, or a news summary, treat it as an unconfirmed rumor until you have checked it against a primary source.


6 Key Lessons for Businesses that Collect and Use Consumer Data

Ensure your privacy notice properly discloses consumer rights, including the right to access, correct, and delete personal data stored and collected by businesses, and the right to opt-out of the sale of personal data and targeted advertising. Mechanisms for exercising those rights must work properly, with a process in place to ensure a timely response to consumer requests. ... Another issue that the Connecticut AG raised was that the privacy notice was “largely unreadable.” While privacy notices address legal rights and obligations, you should avoid using excessive legal jargon to the extent possible and use clear, simple language to notify consumers about their rights and the mechanisms for exercising those rights. In addition, be as succinct as possible to help consumers locate the information they need to understand and exercise applicable rights. ... The AG provided guidance that under the CTDPA, if a business uses cookie banners to permit a consumer to opt-out of some data processing, such as targeted advertising, the consumer must be provided with a symmetrical choice. In other words, it has to be as clear and as easy for the consumer to opt out of such use of their personal data as it would be to opt in. This includes making the options to accept all cookies and to reject all cookies visible on the screen at the same time and in the same color, font, and size.


How agentic AI Is reshaping execution across BFSI

Several BFSI firms are already deploying agentic models within targeted areas of their operations. The results are visible in micro-interventions that improve process flow and reduce manual load. Autonomous financial advisors, powered by agentic logic, are now capable of not just reacting to user input, but proactively monitoring markets, assessing customer portfolios, and recommending real-time changes.. In parallel, agentic systems are transforming customer service by acting as intelligent finance assistants, guiding users through complex processes such as mortgage applications or claims filing. ... For Agentic AI to succeed, it must be integrated into operational strategy. This begins by identifying workflows where progress depends on repetitive human actions that follow predictable logic. These are often approval chains, verifications, task handoffs, and follow-ups. Once identified, clear rules need to be defined. What conditions trigger an action? When is escalation required? What qualifies as a closed loop? The strength of an agentic system lies in its ability to act with precision, but that depends on well-designed logic and relevant signals. Data access is equally important. Agentic AI systems require context. That means drawing from activity history, behavioural cues, workflow states and timing patterns. 


Open Source Is Too Important To Dilute

The unfortunate truth is that these criteria don’t apply in every use case. We’ve seen vendors build traction with a truly open project. Then, worried about monetization or competition, they relicense it under a “source-available” model with restrictions, like “no commercial use” or “only if you’re not a competitor.” But that’s not how open source works. Software today is deeply interconnected. Every project — no matter how small or isolated — relies on dependencies, which rely on other dependencies, all the way down the chain. A license that restricts one link in that chain can break the whole thing. ... Forks are how the OSS community defends itself. When HashiCorp relicensed Terraform under the Business Source License (BSL) — blocking competitors from building on the tooling — the community launched OpenTofu, a fork under an OSI-approved license, backed by major contributors and vendors. Redis’ transition away from Berkeley Software Distribution (BSD) to a proprietary license was a business decision. But it left a hole — and the community forked it. That fork became Valkey, a continuation of the project stewarded by the people and platforms who relied on it most. ... The open source brand took decades to build. It’s one of the most successful, trusted ideas in software history. But it’s only trustworthy because it means something.

Daily Tech Digest - June 09, 2025


Quote for the day:

"Motivation gets you going and habit gets you there." -- Zig Ziglar


Architecting Human-AI Relationships: Governance Frameworks for Emotional AI Integration

The talent retention implications prove equally compelling, particularly as organizations compete for digitally native workforce demographics who view AI collaboration as a natural extension of professional relationships. ... Perhaps most significantly, healthy human-AI collaboration frameworks unleash innovation potential that traditional technology deployment approaches consistently fail to achieve. When teams feel psychologically safe in their AI partnerships—confident that transitions will be managed thoughtfully and that their emotional investment in digital collaborators is acknowledged and supported—they demonstrate a remarkable willingness to explore advanced AI capabilities, experiment with novel applications, and push the boundaries of what artificial intelligence can accomplish within organizational contexts. ... The ultimate result is organizational resilience that extends far beyond technical robustness. Comprehensive governance approaches that address technical performance and psychological factors create AI ecosystems that adapt gracefully to technological change, maintain continuity through system transitions, and sustain collaborative effectiveness across the inevitable evolution of artificial intelligence capabilities.


CISOs reposition their roles for business leadership

“The CISOs of the present and the future need to get out of being just technologists and build their influence muscle as well as their communication muscle,” Kapil says. They need to be able to “relay the technology and cyber messaging in words and meanings where a non-technologist actually understands why we’re doing what we’re doing.” ... “CISOs who are enablers can have the greatest impact on the business because they understand the business objectives,” LeMaire explains. “I like to say we don’t do cybersecurity for cybersecurity’s sake. … Ultimately, we do cybersecurity to contribute to the goals, missions, and objectives of the greater organization. When you’re an enabler that’s what you’re doing.” ... The BISO role emerged to bridge the gap between business objectives and cybersecurity oversight that has existed in many companies, Petrik says. “By acting as a liaison between business, technology, and cybersecurity teams, the BISO ensures that security measures are aligned with business strategies and integrated effectively,” he says. Digital transformation, emerging technologies, and rapid innovation are business mandates, and security teams add value and manage risk better when they are involved before a platform is selected or implemented, he says.


Balancing Safety and Security in Software-Defined Vehicles

Features such as Bluetooth, Wi-Fi, and cellular networks improve user convenience but create multiple attack vectors. For example, infotainment systems, because of their connectivity, are prime targets on software-defined vehicles. The recent Nissan LEAF hack revealed exactly this vulnerability, with researchers using the vehicle’s infotainment system as an entry point to access critical vehicle controls, including the steering. Not only can attackers gain access to data and location information, they can use vulnerable infotainment systems as an on-ramp to access other critical vehicle systems, like Advanced Driver Assistance Systems (ADAS), CAN-Bus, or key engine control units. ... Real-Time Operating Systems play a key role in the functionality of software-defined vehicles, as they enable precise, time-critical operations for systems like Electronic Control Units (ECUs). ECUs are primarily programmed in C and C++ due to the need for efficiency and performance in resource-constrained environments. ... Memory-based vulnerabilities, inherent to C/C++ programming, can be exploited to enable remote code execution, potentially compromising critical safety and performance systems. This creates serious cybersecurity and reliability concerns for vehicles. As RTOS suppliers manage numerous processes, any vulnerability in their codebase can be a gateway for attackers, increasing the likelihood of malicious exploits across the interconnected vehicle ecosystem.


The agile blueprint for simplifying performance management: Rethinking reviews for real impact

Understanding performance has a psychological side to it. Recognising this effect on performance frameworks, Rashmi suggested that imposter syndrome can be mitigated by making progress visible. “When you see your results in real time, you can’t keep criticising yourself.” The panellists encouraged managers to have personal discussions with their team members, which would help them build bonds. Rashmi highlighted this aspect, which can be leveraged through AI. “If AI says that there has been no potential feedback for the employee in the last month, then let the technology help the manager remind.” She also added, “Scaling up makes the quarterly reviews an exercise; hence, spontaneous quarterly check-ins are important.” Rashmi also advocated for weekly, human-centred check-ins, features that are integrated in HRStop, where it won’t be just about tracking project status, but to understand employees as people. “Treat it like a family discussion,” Rashmi recommended. “A touch of personal conversation builds deeper rapport.” Another aspect that came up in the discussion was coaching. Vimal emphasised that coaching must happen at all levels—from CXOs to interns. “It’s this cultural consistency that builds trust, retention, and performance”, he added.


Is this the perfect use case for police facial recognition?

First, as the judge noted, “fortunately the technology available prevented physical contact going further”. Availability is important here, not just in terms of the equipment being accessible; it has a specific legal element too. Where the technological means to prevent inhumane or degrading treatment are reasonably available to the police, the law in England and Wales may not just permit the use of remote biometric technology, it may even require it. I’m unaware of anyone relying on this human rights argument yet and we won’t know if these conditions would have met that threshold. ... Second, the person was on the watchlist because he was subject to a court order. This was not the public under ‘general surveillance’: a court had been satisfied on the evidence presented that an order was necessary to protect the public from sexual harm from him. He breached that order by insinuating himself into the life of a 6-year-old girl and was found alone with her. He was accurately matched with the watchlist image. The third feature is that the technology did its job. It would be easy to celebrate this as a case of ‘thank goodness nothing happened’ but that would underestimate its significance and miss the legal areas where FRT will be challenged. 


IT leaders’ top 5 barriers to AI success

Data quality issues are a real concern and an actual barrier to AI adoption, but the problem is much larger than the traditional and typical discussion about data quality in transactional or analytical environments, says John Thompson, senior vice president and principal at AI consulting firm The Hackett Group. “With gen AI, literally 100% of an organization’s data, documents, videos, policies, procedures, and more are available for active use,” Thompson says. This is a much larger issue than data quality in systems such as enterprise resource planning (ERP) or customer relationship management (CRM), he says. ... Organizations need the infrastructure in place to educate and train its employees to understand the capabilities and limitations of AI, Ally’s Muthukrishnan says. “Without the right training, adoption and utilization will not achieve the outcome you’re hoping for,” he adds. “While I believe AI is one of the largest tech transformations of our lifetime, integrating it into day-to-day processes is a huge change management undertaking.” ... “The skills gap is only going to grow,” Hackett Group’s Thompson says. “Now is the time to start. You can start with your team. Have them work on test cases. Have them work on personal projects. Have them work on passion projects. [Taking] time for everyone to take a class is just elongating the process to close the skills gap. ...”


Google’s Cloud IDP Could Replace Platform Engineering

Much of the work behind the Google Cloud IDP comes from Anna Berenberg, an engineering fellow with Google Cloud who has been with the company for 19 years. “She is the originator of a lot of these concepts overall … many of these ideas which I did not really understand the impact of until I saw it manifest itself,” said Seroter. “She had this vision that I did not even buy into three years ago. She saw a little further ahead from there, and she has built and published things. It is impressive to have such interesting engineering thought leadership, not just applied to how Google does platforms, but now turning that into how we can change … infrastructure to make it simpler. She is a pioneer of that.” In an interview with The New Stack, Berenberg said that her ideas on the IDP came to her when she looked at how this could all work using Google’s vast compute and services resources to reimagine how platform engineering could be improved. “The way it works is you have a cloud platform, and then on top of it is this thick layer of platform engineering stuff, right?” said Berenberg. “So, platform engineering teams are building a layer on top of infrastructure cloud to do an abstraction and workflows and whatever they need” to improve processes for developers. “It shrinks down because everything shifts down to the platform and now we are providing platform engineering. “


FakeCaptcha Infrastructure HelloTDS Infects Millions of Devices With Malware

The campaign’s cunning blend of social engineering and technical subterfuge has enabled threat actors to compromise systems across a vast array of regions, targeting unsuspecting users as they consume streaming media, download shared files, or even browse legitimate-appearing websites. Gendigital researchers first identified HelloTDS as an intricate Traffic Direction System (TDS) — a malicious decision engine that leverages device and network fingerprinting to select which visitors receive harmful payloads, ranging from infostealers like LummaC2 to fraudulent browser updates and tech support scams. Entry points for the menace include compromised or attacker-operated file-sharing portals, streaming sites, pornographic platforms, and even malvertising embedded in seemingly innocuous ad spots. The system’s filtering and redirection logic allows it to avoid obvious honeytraps such as virtual machines, VPNs, or known analyst environments, significantly complicating detection and takedown efforts. The scale of the campaign is staggering. Gen’s telemetry reported over 4.3 million attempted infections within just two months, with the highest impact in the United States, Brazil, India, Western Europe, and, proportionally, several Balkan and African countries.


Cutting-Edge ClickFix Tactics Snowball, Pushing Phishing Forward

ClickFix first came to light as an attack method last year when Proofpoint researchers observed compromised websites serving overlay error messages to visitors. The message claimed that a faulty browser update was causing problems, and asked the victim to open "Windows PowerShell (Admin)" (which will open a User Account Control (UAC) prompt) and then right-click to paste code that supposedly "fixed" the problem — hence the attack name. Instead of a fix, though, users were unwittingly installing malware — in that case, it was the Vidar stealer. ... "The goals of ClickFix campaigns vary depending on the attacker," says Nathaniel Jones, vice president of security and AI strategy at Darktrace. "The aim might be to infect as many systems as possible to build out a network of proxies to use later. Some attackers are trying to exfiltrate credentials or domain controller files and then sell to other threat actors for initial access. So there isn't one type of victim or one objective — the tactic is flexible and being used in different ways." ... The approach, and ClickFix in general, represents a significant innovation in the world of phishing, according to Jones, because unlike an email asking someone to click on a typosquatted link that can be easily checked, the entire attack takes place inside the browser.


Like humans, AI is forcing institutions to rethink their purpose

The institutions in place now were not designed for this moment. Most were forged in the Industrial Age and refined during the Digital Revolution. Their operating models reflect the logic of earlier cognitive regimes: stable processes, centralized expertise and the tacit assumption that human intelligence would remain preeminent. ... But the assumptions beneath these structures are under strain. AI systems now perform tasks once reserved for knowledge workers, including summarizing documents, analyzing data, writing legal briefs, performing research, creating lesson plans and teaching, coding applications and building and executing marketing campaigns. Beyond automation, a deeper disruption is underway: The people running these institutions are expected to defend their continued relevance in a world where knowledge itself is no longer as highly valued or even a uniquely human asset. ... This does not mean institutional collapse is inevitable. But it does suggest that the current paradigm of stable, slow-moving and authority-based structures may not endure. At a minimum, institutions are under intense pressure to change. If institutions are to remain relevant and play a vital role in the age of AI, they must become more adaptive, transparent and attuned to the values that cannot readily be encoded in algorithms: human dignity, ethical deliberation and long-term stewardship.

Daily Tech Digest - May 10, 2025


Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.



Building blocks – what’s required for my business to be SECURE?

Zero Trust Architecture involves a set of rules that will ensure that you will not let anyone in without proper validation. You will assume there is a breach. You will reduce privileges to their minimum and activate them only as needed and you will make sure that devices connecting to your data are protected and monitored. Enclave is all about aligning your data’s sensitivity with your cybersecurity requirements. For example, to download a public document, no authentication is required, but to access your CRM, containing all your customers’ data, you will require a username, password, an extra factor of authentication, and to be in the office. You will not be able to download the data. Two different sensitivities, two experiences. ... The leadership team is the compass for the rest of the company – their north star. To make the right decision during a crisis, you much be prepared to face it. And how do you make sure that you’re not affected by all this adrenaline and stress that is caused by such an event? Practice. I am not saying that you must restore all your company’s backups every weekend. I am saying that once a month, the company executives should run through the plan. ... Most plans that were designed and rehearsed five years ago are now full of holes. 


Beyond Culture: Addressing Common Security Frustrations

A majority of security respondents (58%) said they have difficulty getting development to prioritize remediation of vulnerabilities, and 52% reported that red tape often slows their efforts to quickly fix vulnerabilities. In addition, security respondents pointed to several specific frustrations related to their jobs, including difficulty understanding security findings, excessive false positives and testing happening late in the software development process. ... If an organization sees many false positives, that could be a sign that they haven’t done all they can to ensure their security findings are high fidelity. Organizations should narrow the focus of their security efforts to what matters. That means traditional static application security testing (SAST) solutions are likely insufficient. SAST is a powerful tool, but it loses much of its value if the results are unmanageable or lack appropriate context. ... Although AI promises to help simplify software development processes, many organizations still have a long road ahead. In fact, respondents who are using AI were significantly more likely than those not using AI to want to consolidate their toolchain, suggesting that the proliferation of different point solutions running different AI models could be adding complexity, not taking it away.


Significant Gap Exists in UK Cyber Resilience Efforts

A persistent lack of skilled cybersecurity professionals in the civil service is one reason for the persistent gap in resilience, parliamentarians wrote. "Government has been unwilling to pay the salaries necessary to hire the experienced and skilled people it desperately needs to manage its cybersecurity effectively." Government figures show the workforce has grown and there are plans to recruit more experts - but a third of cybersecurity roles are either vacant "or filled by expensive contractors," the report states. "Experience suggests government will need to be realistic about how many of the best people it can recruit and retain." The report also faults government departments for not taking sufficient ownership over cybersecurity. The prime minister's office for years relied on departments to perform a cybersecurity self-assessment, until in 2023 when it launched GovAssure, a program to bring in independent assessors. GovAssure turned the self-assessments on their head, finding that the departments that ranked themselves the highest through self-assessment were among the less secure. Continued reliance on legacy systems have figured heavily in recent critiques of British government IT, and it does in the parliamentary report, as well. "It is unacceptable that the center of government does not know how many legacy IT systems exist in government and therefore cannot manage the associated cyber risks."


How CIOs Can Boost AI Returns With Smart Partnerships

CIOs face an overwhelming array of possibilities, making prioritization critical. The CIO Playbook 2025 helps by benchmarking priorities across markets and disciplines. Despite vast datasets, data challenges persist as only a small, relevant portion is usable after cleansing. Generative AI helps uncover correlations humans might miss, but its outputs require rigorous validation for practical use. Static budgets, growing demands and a shortage of skilled talent further complicate adoption. Unlike traditional IT, AI affects sales, marketing and customer service, necessitating cross-departmental collaboration. For example, Lenovo's AI unifies customer service channels such as email and WhatsApp, creating seamless interactions. ... First, go slow to go fast. Spend days or months - not years - exploring innovations through POCs. A customer who builds his or her own LLM faces pitfalls; using existing solutions is often smarter. Second, prioritize cross-collaboration, both internally across departments and externally with the ecosystem. Even Lenovo, operating in 180 markets, relies on partnerships to address AI's layers - the cloud, models, data, infrastructure and services. Third, target high-ROI functions such as customer service, where CIOs expect a 3.6-fold return, to build boardroom support for broader adoption.


How to Stop Increasingly Dangerous AI-Generated Phishing Scams

With so many avenues of attack being used by phishing scammers, you need constant vigilance. AI-powered detection platforms can simultaneously analyze message content, links, and user behavior patterns. Combined with sophisticated pattern recognition and anomaly identification techniques, these systems can spot phishing attempts that would bypass traditional signature-based approaches. ... Security awareness programs have progressed from basic modules to dynamic, AI-driven phishing simulations reflecting real-world scenarios. These simulations adapt to participant responses, providing customized feedback and improving overall effectiveness. Exposing team members to various sophisticated phishing techniques in controlled environments better prepares them for the unpredictable nature of AI-powered attacks. AI-enhanced incident response represents another promising development. AI systems can quickly determine an attack's scope and impact by automating phishing incident analysis, allowing security teams to respond more efficiently and effectively. This automation not only reduces response time but also helps prevent attacks from spreading by rapidly isolating compromised systems. 


Immutable Secrets Management: A Zero-Trust Approach to Sensitive Data in Containers

We address the critical vulnerabilities inherent in traditional secrets management practices, which often rely on mutable secrets and implicit trust. Our solution, grounded in the principles of Zero-Trust security, immutability, and DevSecOps, ensures that secrets are inextricably linked to container images, minimizing the risk of exposure and unauthorized access. We introduce ChaosSecOps, a novel concept that combines Chaos Engineering with DevSecOps, specifically focusing on proactively testing and improving the resilience of secrets management systems. Through a detailed, real-world implementation scenario using AWS services and common DevOps tools, we demonstrate the practical application and tangible benefits of this approach. The e-commerce platform case study showcases how immutable secrets management leads to improved security posture, enhanced compliance, faster time-to-market, reduced downtime, and increased developer productivity. Key metrics demonstrate a significant reduction in secrets-related incidents and faster deployment times. The solution directly addresses all criteria outlined for the Global Tech Awards in the DevOps Technology category, highlighting innovation, collaboration, scalability, continuous improvement, automation, cultural transformation, measurable outcomes, technical excellence, and community contribution.


The Network Impact of Cloud Security and Operations

Network security and monitoring also change. With cloud-based networks, the network staff no longer has all its management software under its direct control. It now must work with its various cloud providers on security. In this environment, some small company network staff opt to outsource security and network management to their cloud providers. Larger companies that want more direct control might prefer to upskill their network staff on the different security and configuration toolsets that each cloud provider makes available. ... The move of applications and systems to more cloud services is in part fueled by the growth of citizen IT. This is when end users in departments have mini IT budgets and subscribe to new IT cloud services, of which IT and network groups aren't always aware. This creates potential security vulnerabilities, and it forces more network groups to segment networks into smaller units for greater control. They should also implement zero-trust networks that can immediately detect any IT resource, such as a cloud service, that a user adds, subtracts or changes on the network. ... Network managers are also discovering that they need to rewrite their disaster recovery plans for cloud. The strategies and operations that were developed for the internal network are still relevant. 


Three steps to integrate quantum computing into your data center or HPC facility

Just as QPU hardware has yet to become commoditized, the quantum computing stack remains in development, with relatively little consistency in how machines are accessed and programmed. Savvy buyers will have an informed opinion on how to leverage software abstraction to accomplish their key goals. With the right software abstractions, you can begin to transform quantum processors from fragile, research-grade tools into reliable infrastructure for solving real-world problems. Here are three critical layers of abstraction that make this possible. First, there’s hardware management. Quantum devices need constant tuning to stay in working shape, and achieving that manually takes serious time and expertise. Intelligent autonomy provided by specialist vendors can now handle the heavy lifting – booting, calibrating, and keeping things stable – without someone standing by to babysit the machine. Then there’s workload execution. Running a program on a quantum computer isn’t just plug-and-play. You usually have to translate your high-level algorithm into something that works with the quirks of the specific QPU being used, and address errors along the way. Now, software can take care of that translation and optimization behind the scenes, so users can just focus on building quantum algorithms and workloads that address key research or business needs.


Where Apple falls short for enterprise IT

First, enterprise tools in many ways could be considered a niche area of software. As a result, enterprise functionality doesn’t get the same attention as more mainstream features. This can be especially obvious when Apple tries to bring consumer features into enterprise use cases — like managed Apple Accounts and their intended integration with things like Continuity and iCloud, for example — and things like MDM controls for new features such a Apple Intelligence and low-level enterprise-specific functions like Declarative Device Management. The second reason is obvious: any piece of software that isn’t ready for prime time — and still makes it into a general release — is a potential support ticket when a business user encounters problems. ... Deployment might be where the lack of automation is clearest, but the issue runs through most aspects of Apple device and user onboarding and management. Apple Business Manager doesn’t offer any APIs that vendors or IT departments can tap into to automate routine tasks. This can be anything from redeploying older devices, onboarding new employees, assigning app licenses or managing user groups and privileges. Although Apple Business Manager is a great tool and it functions as a nexus for device management and identity management, it still requires more manual lifting than it should.


Getting Started with Data Quality

Any process to establish or update a DQ program charter must be adaptable. For example, a specific project management or a local office could start the initial DQ offering. As other teams see the program’s value, they would show initiative. In the meantime, the charter tenets change to meet the situation. So, any DQ charter documentation must have the flexibility to transform into what is currently needed. Companies must keep track of any charter amendments or additions to provide transparency and accountability. Expect that various teams will have overlapping or conflicting needs in a DQ program. These people will need to work together to find a solution. They will need to know the discussion rules to consistently advocate for the DQ they need and express their challenges. Ambiguity will heighten dissent. So, charter discussions and documentation must come from a well-defined methodology. As the white paper notes, clarity, consistency, and alignment sit at the charter’s core. While getting there can seem challenging, an expertly structured charter template can prompt critical information to show the way. ... The best practices documented by the charter stem from clarity, consistency, and alignment. They need to cover the DQ objectives mentioned above and ground DQ discussions.