
Daily Tech Digest - January 17, 2026


Quote for the day:

"Success does not consist in never making mistakes but in never making the same one a second time." -- George Bernard Shaw



Expectations from AI ramp up as investors eye returns in 2026

Billions in investments and a concerted focus on the tech over the past few years have led to artificial intelligence (AI) completely transforming how major global industries work. Now, investors are finally expecting to see some returns. ... Investors will no longer be satisfied with AI’s potential future capabilities – they want measurable returns on investment (ROI), says Jiahao Sun, the CEO of Flock.ie, a platform that allows users to build, train and deploy AI models in a decentralised manner. AI investment is entering its “show me the money era”, he says. This isn’t to say that investment in AI will pause, but that investors will begin prioritising critical areas that give guaranteed returns. These could include agentic AI platforms that enable multi-agent orchestration; AI-native infrastructures built for scale, security and interoperability; data modernisation tools that unlock the full potential of unstructured data; and AI observability and safety tools that monitor, govern and refine agent behaviour in real time, explains Neeraj Abhyankar, the VP of Data and AI at R Systems. ... “Single-purpose tools will be absorbed into unified AI platforms. The era of juggling 10 different AI products is ending and the race to offer a complete, integrated experience will intensify,” he adds. Meanwhile, some experts say that the EU’s AI Act will – for better or for worse – prohibit European firms from experimenting with high-risk use cases for AI.


The Next S-Curve of Cybersecurity: Governing Trust in a New Converging Intelligence Economy

Cybersecurity has crossed a threshold where it no longer merely protects technology: it governs trust itself. In an era defined by AI-driven decision-making, decentralized financial systems, cloud-to-edge computing, and the approaching reality of quantum disruption, cyber risk is no longer episodic or containable. It is continuous, compounding, and enterprise-defining. What changed in 2025 wasn’t just the threat landscape. It was the architecture of risk. Identity replaced networks as the dominant attack surface. Software supply chains emerged as systemic liabilities. Machine intelligence, on both sides of the attack, began evolving faster than the controls designed to govern it. For boards, investors, and executives, this marked the end of cybersecurity as a control function and the beginning of cybersecurity as a strategic mandate. ... The next S-curve of cybersecurity is not driven by better tooling. It is driven by a shift in how trust is architected and governed across a converging ecosystem. This new curve is defined by: identity-centric security rather than network-centric defense; data-aware protection instead of application-bound controls; continuous assurance rather than point-in-time audits; and integration with enterprise risk, governance, and capital strategy. Cybersecurity evolves from a defensive posture into a trust architecture discipline: one that governs how intelligence, identity, data, and decisions interact at scale.


Why Mental Fitness Is Leadership's Next Frontier

The distinction Craze draws between mental health and mental fitness is crucial. Mental health, he explains, is ultimately about functioning—being sufficiently free from psychological injury or mental illness to show up and perform one's job. "Your mental health or illness is a private matter between yourself, and perhaps your family or physician, and is a matter of respecting your individual rights," he says. Mental fitness, by contrast, is about capacity. "Assuming you are mentally healthy enough to show up and perform your job, then mental fitness is all about how well your mind performs under load, over time, and in conditions of uncertainty," Craze explains. "Being mentally healthy is a baseline. Being mentally fit is what allows leaders to think clearly at hour ten, stay composed in conflict, and recover quickly after setbacks rather than slowly eroding away," he says. Here, the comparison to elite athletics is instructive. In professional sports, no one confuses being injury-free with being competition-ready. Leadership has been slower to make that distinction, even as today’s executives face sustained cognitive and emotional demands that would have been unthinkable a generation ago. ... One of the most persistent myths in leadership development, according to Craze, is the idea that thinking happens in some abstract cognitive space, detached from the body. "In reality, every act of judgment, attention and self-control has an underlying physiological component and cost," he says. 


Taking the Technical Leadership Path

Without technical alignment, individuals constantly touch the same codebase, adding their feature in the simplest way (for them) without ensuring the codebase is kept consistent. Over time, accidental complexity grows: five different libraries that do the same job, or seven different implementations of how an email or push notification is sent. When someone later wants to change that area, their work is now much harder. ... There are plenty of resources available to develop leadership skills. Kua advised breaking broader leadership skills into specific ones, such as coaching, mentoring, communicating, mediating, and influencing. Even when someone is not a formal leader, there are daily opportunities to practice these skills in the workplace, he said. ... Formal technical leaders are accountable for ensuring teams have enough technical leadership. One way of doing this is to cultivate an environment where everyone is comfortable stepping up and demonstrating technical leadership. Done well, this means everyone can demonstrate informal technical leadership. Formal leaders exist because not all teams are automatically healthy or high-performing. I’m sure every technical person can remember a team they’ve been on with two engineers constantly debating which approach to take, wishing someone had stepped in to help the team reach a decision. In an ideal world, a formal leader wouldn’t be necessary, but it’s rare that teams live in that perfect world.


From model collapse to citation collapse: risks of over-reliance on AI in the academy

Model collapse is the slow erosion of a generative AI system’s grounding in reality as it learns more and more from machine-generated data rather than from human-generated content. As a result of model collapse, the AI model loses diversity in its outputs, reinforces its misconceptions, increases its confidence in its hallucinations and amplifies its biases. ... Among all the writing tasks involved in research, GenAI appears to be disproportionately good at writing literature reviews. ChatGPT and Google Gemini both have deep research features that try to take a deep dive into the literature on a topic, returning heavily sourced and relatively accurate syntheses of the related research, while typically avoiding the well-documented tendency to hallucinate sources altogether. In some ways, it should not be too surprising that these technologies thrive in this area because literature reviews are exactly the sort of thing GenAI should be good at: textual summaries that stay pretty close to the source material. But here is my major concern: while nothing is fundamentally wrong with the way GenAI surfaces sources for literature reviews, it risks exacerbating the citation Matthew effect that tools like Google Scholar have caused. Modern AI models largely thrive on a snapshot of the internet circa 2022. In fact, I suspect that verifiably pre-2022 datasets will become prized sources for future models, largely untainted by AI-generated content, in much the same way that pre-World War II steel is prized for its lack of radioactive contamination from nuclear testing.


Why is Debugging Hard? How to Develop an Effective Debugging Mindset

Here’s how most developers debug code: Something is broken; Let me change the line; Let’s refresh (wishing the error would go away); Hmm… still broken!; Now, let me add a console.log(); Let me refresh again (Ah, this time it may…); Ok, looks like this time it worked! This is reaction-based debugging. It’s like throwing a stone in the dark or searching for a needle in a haystack. It feels busy, it sounds productive, but it’s mostly guessing. And guessing doesn’t scale in programming. This approach and the guessing mindset make debugging hard for developers. The lack of a methodology and solid approach makes many devs feel helpless and frustrated, which makes the process feel much more difficult than coding. This is why we need a different mental model, a defined skillset to master the art of debugging. ... Good debuggers don’t fight bugs. They investigate them. They don’t start with the mindset of “How do I fix this?”. They start with, “Why must this bug exist?” This one question changes everything. When you ask about the existence of a bug, you go back to the history to collect information about the code, its changes, and its flow. Then, you feed this information through a “mental model” to make decisions that lead you to the fix. ... Once the facts are clear and assumptions are visible, the debugging makes its way forward. Now you’ll need to form a hypothesis. A hypothesis is a simple cause-and-effect statement: if this assumption is wrong, then the observed behaviour makes sense. Test it: if it holds, you have found the cause and can provide a fix; if not, form the next hypothesis.
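The observe-then-hypothesize loop described above can be sketched in a few lines of Python. The buggy apply_discount function and its values are invented for illustration; the point is stating the hypothesis as a testable claim before touching the code:

```python
# A minimal sketch of hypothesis-driven debugging, using a hypothetical
# discount function as the code under investigation.

def apply_discount(price, percent):
    # Suspected bug: integer division silently drops the fractional part.
    return price - price * percent // 100

# Step 1: record observed facts, not guesses.
observed = apply_discount(100, 15)   # happens to look fine: 85
edge_case = apply_discount(99, 15)   # expected 84.15

# Step 2: state the hypothesis as a cause-and-effect claim:
# "IF the discount uses integer division, THEN fractional cents are lost."
hypothesis_holds = edge_case != 99 - 99 * 15 / 100

# Step 3: confirm (or refute) the hypothesis BEFORE attempting a fix.
print(f"edge_case={edge_case}, hypothesis_holds={hypothesis_holds}")
```

If the hypothesis is refuted, nothing is changed and the next hypothesis is formed; the fix is only written once a failing check like this pins down the cause.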


Promptware Kill Chain – Five-Step Kill Chain Model for Analyzing Cyberthreats

While the security industry has focused narrowly on prompt injection as a catch-all term, the reality is far more complex. Attacks now follow systematic, sequential patterns: initial access through malicious prompts, privilege escalation by bypassing safety constraints, establishing persistence in system memory, moving laterally across connected services, and finally executing their objectives. This mirrors how traditional malware campaigns unfold, suggesting that conventional cybersecurity knowledge can inform AI security strategies. ... The promptware kill chain begins with Initial Access, where attackers insert malicious instructions through prompt injection—either directly from users or indirectly through poisoned documents retrieved by the system. The second phase, Privilege Escalation, involves jailbreaking techniques that bypass safety training designed to refuse harmful requests. ... Traditional malware achieves persistence through registry modifications or scheduled tasks. Promptware exploits the data stores that LLM applications depend on. Retrieval-dependent persistence embeds payloads in data repositories like email systems or knowledge bases, reactivating when the system retrieves similar content. Even more potent is retrieval-independent persistence, which targets the agent’s memory directly, ensuring the malicious instructions execute on every interaction regardless of user input.


AI SOC Agents Are Only as Good as the Data They Are Fed

If your telemetry is fragmented, your schemas are inconsistent, or your context is missing, you won’t get faster responses from AI SOC agents. You’ll just get faster mistakes. These agents are being built to excel at cybersecurity analysis and decision support. They are not constructed to wrangle data collection, cleansing, normalization, and governance across dozens of sources. ... Modern SOCs integrate telemetry from EDRs, cloud providers, identity, networks, SaaS apps, data lakes, and more. Normalizing all that into a common schema eliminates the constant “translation tax.” An agent that can analyze standardized fields once, and doesn’t have to re-learn CrowdStrike vs. Splunk Search Processing Language vs. vendor-specific JavaScript Object Notation, will make faster, more reliable decisions. ... If the agent must “crawl back” into five source systems to enrich an alert on its own, latency spikes and success rates drop. The right move is to centralize, normalize, and clean security data into an accessible store, like a data lake, for your AI SOC agents and continue streaming a distilled, security-relevant subset to the Security Information and Event Management (SIEM) platform for detections and cybersecurity analysts. Let the SIEM be the place where detections originate; let the lake be the place your agents do their deep thinking. The problem is that the industry’s largest SIEM, Endpoint Detection and Response (EDR), and Security Orchestration, Automation, and Response (SOAR) platforms are consolidating into vertically integrated ecosystems. ...
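The "common schema" idea can be sketched roughly in Python. The vendor names, field names, and severity mappings below are invented for illustration (not any real EDR or cloud payload); the point is that the agent downstream only ever sees one shape:

```python
# A sketch of telemetry normalization: two hypothetical vendor alert
# formats mapped onto one shared schema before any agent sees them.

def normalize(alert: dict, vendor: str) -> dict:
    """Map vendor-specific fields onto a common schema."""
    if vendor == "vendor_a":  # e.g. an EDR-style payload (fields assumed)
        return {
            "timestamp": alert["event_time"],
            "host": alert["device_name"],
            "severity": alert["sev"].lower(),
        }
    if vendor == "vendor_b":  # e.g. a cloud-provider payload (fields assumed)
        return {
            "timestamp": alert["ts"],
            "host": alert["resource"]["hostname"],
            "severity": {1: "low", 2: "medium", 3: "high"}[alert["level"]],
        }
    raise ValueError(f"unknown vendor: {vendor}")

a = normalize({"event_time": "2026-01-17T10:00Z",
               "device_name": "web-01", "sev": "HIGH"}, "vendor_a")
b = normalize({"ts": "2026-01-17T10:01Z",
               "resource": {"hostname": "db-02"}, "level": 3}, "vendor_b")
print(a["severity"], b["severity"])  # both arrive as "high"
```

In practice this mapping layer lives in the pipeline feeding the lake, so the "translation tax" is paid once at ingest rather than on every query.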


IT portfolio management: Optimizing IT assets for business value

The enterprise’s most critical systems for conducting day-to-day business are a category unto themselves. These systems may be readily apparent, or hidden deep in a technical stack. So all assets should be evaluated as to how mission-critical they are. ... The goal of an IT portfolio is to contain assets that are presently relevant and will continue to be relevant well into the future. Consequently, asset risk should be evaluated for each IT resource. Is the resource at risk for vendor sunsetting or obsolescence? Is the vendor itself unstable? Does IT have the on-staff resources to continue running a given system, no matter how good it is (a custom legacy system written in COBOL and Assembler, for example)? Is a particular system or piece of hardware becoming too expensive to run? Do existing IT resources have a clear path to integration with the new technologies that will populate IT in the future? ... Is every IT asset pulling its weight? Like monetary and stock investments, technologies under management must show they are continuing to produce measurable and sustainable value. The primary indicators of asset value that IT uses are total cost of ownership (TCO) and return on investment (ROI). TCO is what gauges the value of an asset over time. For instance, investments in new servers for the data center might have paid off four years ago, but now the data center has an aging bay of servers with obsolete technology and it is cheaper to relocate compute to the cloud.


Ransomware activity never dies, it multiplies

One of the most significant findings in the study involves extortion campaigns that do not rely on encryption. These attacks focus on stealing data and threatening to publish it, skipping the deployment of ransomware entirely. Encryption-based attacks remained just above 4,700 incidents annually. When data theft extortion is included, total extortion incidents reached 6,182 in 2025. That represents a 23% increase compared with 2024. Snakefly, which runs the Cl0p ransomware operation, played a major role in this shift. These actors exploited vulnerabilities in widely used enterprise software to extract data at scale. Victims included large organizations in government and industry, with some campaigns affecting hundreds of companies through a single flaw. ... A newer ransomware strain tracked as Warlock drew attention due to its tooling and infrastructure. First observed in mid-2025, Warlock attacks exploited a zero-day vulnerability in Microsoft SharePoint and used DLL sideloading for payload delivery. Analysis linked Warlock to tooling previously associated with Chinese espionage activity, including signed drivers and custom command frameworks. Some ransomware payloads appeared to be modified versions of leaked LockBit code, combined with older malware components. The study notes overlaps between ransomware activity and long-running espionage campaigns, where ransomware deployment may serve operational or financial goals within broader intrusion efforts.

Daily Tech Digest - November 25, 2025


Quote for the day:

“Being kind to those who hate you isn’t weakness, it’s a different level of strength.” -- Dr. Jimson S


Invisible battles: How cybersecurity work erodes mental health in silence and what we can do about it

You’re not just solving puzzles. You’re responsible for keeping a digital fortress from collapsing under relentless siege. That kind of pressure reshapes your brain and not in a good way. ... One missed patch. One misconfigured access role. One phishing click. That’s all it takes to trigger a million-dollar disaster or worse: erode trust. You carry that weight. When something goes wrong, the guilt cuts deep. ... The business sees you as the blocker. The board sees you after the breach. And if you’re the lone cyber lead in an SME? You’re on an island, with no lifeboat. No peer to talk to, no outlet to decompress. Just mounting expectations and a growing feeling that nobody really gets what you do. ... The hero narrative still reigns; if you’re not burning out, you’re not trying hard enough. Speak up about being overwhelmed? You risk looking weak. Or worse, replaceable. So you hide it. You overcompensate. And eventually, you break, quietly. ... They expect you to know it all, yesterday. Certifications become survival badges. And with the wrong culture, they become the only form of recognition you get. Systemic chaos builds personal crisis. The toll isn’t abstract. It’s physical, emotional and measurable. ... Cybersecurity professionals are fighting two battles. One is against adversaries. The other is against a system that expects perfection, rewards self-sacrifice and punishes vulnerability.


How to Build Engineering Teams That Drive Outcomes, not Outputs

Aligning teams around clear outcomes reframes what success looks like. They go from saying “this is what we shipped” to “this is what changed” as their role evolves from delivering features to delivering meaningful solutions. ... One way is by changing how teams refer to themselves. This might sound overly simplistic, but a simple shift in team name acts as a constant reminder that their impact is tethered to customer and business outcomes. ... Leaders should treat outcome-based teams as dynamic investments. Rigid predictions are the enemy of innovation. Instead, teams should regularly reevaluate goals, empower adaptation, and allow KPIs to evolve organically from real-world learnings. The desired outcomes don’t necessarily change, but how they are achieved can be fluid. This is how team priorities are defined, new business challenges are solved and evolving customer expectations are met. ... Breaking down engineering silos means reappraising what ownership looks like. If your team’s focus has evolved from “bug fixing” to “continually excellent user experience,” then success is no longer the domain of engineers alone. It’s a collective effort across product, design, and tech — working together as one team. ... Moving to outcome-based teams goes beyond a structural change — it’s a mindset shift. By challenging teams to focus on delivering impact, to stay aligned with evolving needs, and to collaborate more effectively, organizations can build durable, customer-centric teams that can grow, adapt, and never sit still.


Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

Many in the industry are confusing the function of guardrails and thinking they’re a flimsy substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control. ... AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command. AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real time. Guardrails are the enforcement layer. ... While we must distinguish between governance and guardrails, the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or reliant on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability. ... Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.


Age Verification, Estimation, Assurance, Oh My! A Guide To The Terminology

Age gating refers to age-based restrictions on access to online services. Age gating can be required by law or voluntarily imposed as a corporate decision. Age gating does not necessarily refer to any specific technology or manner of enforcement for estimating or verifying a user’s age. ... Age estimation is where things start getting creepy. Instead of asking you directly, the system guesses your age based on data it collects about you. This might include: Analyzing your face through a video selfie or photo; Examining your voice; Looking at your online behavior—what you watch, what you like, what you post; Checking your existing profile data. Companies like Instagram have partnered with services like Yoti to offer facial age estimation. You submit a video selfie, an algorithm analyzes your face, and spits out an estimated age range. Sounds convenient, right? ... Here’s the uncomfortable truth: most lawmakers writing these bills have no idea how any of this technology actually works. They don’t know that age estimation systems routinely fail for people of color, trans individuals, and people with disabilities. They don’t know that verification systems have error rates. They don’t even seem to understand that the terms they’re using mean different things. The fact that their terminology is all over the place—using “age assurance,” “age verification,” and “age estimation” interchangeably—makes this ignorance painfully clear, and leaves the onus on platforms to choose whichever option best insulates them from liability.


Aircraft cabin IoT leaves vendor and passenger data exposed

The cabin network works by having devices send updates to a central system, and other devices are allowed to receive only certain updates. In this system an authorized subscriber is any approved participant on the cabin network, usually a device or a software component that is allowed to receive a certain type of data. The privacy issue begins after the data arrives. Information is protected while it travels, but once it reaches a device that is allowed to read it, that device can view the entire message, including details it does not need for its task. The system controls who receives a message, but it does not control how much those devices can learn from it. The study finds that this creates the biggest risk inside the cabin. Trusted devices have valid credentials and follow all the rules, and they can examine messages closely enough to infer raw sensor readings that were never meant to be exposed. This internal risk matters because it influences how different suppliers share data and trust each other. Someone in the cabin might also try to capture wireless traffic, but the protections on the wireless link prevent them from reading the data as it travels.  ... The researchers found that these raw motion readings can carry extra clues such as small shifts linked to breathing, slight tremors or hints about a person’s body shape. Details like these show why movement data needs protection before it is shared across the cabin network.


Build Resilient cloudops That Shrug Off 99.95% Outages

If a guardrail lives only in a wiki, it’s not a guardrail, it’s an aspiration. We encode risk controls in Terraform so they’re enforced before a resource even exists. Tagging, encryption, backup retention, network egress—these are all policy. We don’t rely on code reviews to catch missing encryption on a bucket; the pipeline fails the plan. That’s how cloudops scales across teams without nag threads. ... Observability isn’t a pile of graphs; it’s a way to answer questions. We want traceability from request to database and back, structured logs that actually structure, and metrics that reflect user experience. ... Most teams benefit from a small set of “stop asking, here it is” dashboards: request volume and latency by endpoint, error rate by version, resource saturation by service, and database health with connection pools and slow query counts. We also wire deploy markers into traces and logs, so “What changed?” doesn’t require Slack archaeology. ... We don’t win medals for shipping fast; we win trust for shipping safely. Progressive delivery lets us test the actual change, in production, on a small slice before we blast everyone. We like canaries and feature flags together: canary catches systemic issues; flags let us disable risky code paths within a version. Every deployment should come with a baked-in rollback that doesn’t require a council meeting. ... Reliability with no cost controls is just a nicer way to miss your margin. We give cost the same respect as latency: we define a monthly budget per product and a change budget per release.
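The "pipeline fails the plan" guardrail can be sketched as a policy check over a Terraform plan's JSON output. The plan structure, resource type, and field names below are simplified assumptions for illustration, not the real `terraform show -json` schema:

```python
# A sketch of policy-as-code in the pipeline: scan a (simplified,
# hypothetical) Terraform plan and reject any bucket that would be
# created without encryption -- before the resource ever exists.

def find_violations(plan: dict) -> list[str]:
    """Return addresses of planned buckets missing encryption."""
    violations = []
    for change in plan.get("resource_changes", []):
        if change["type"] != "s3_bucket":        # illustrative resource type
            continue
        after = change["change"].get("after") or {}
        if not after.get("encryption_enabled"):  # illustrative attribute
            violations.append(change["address"])
    return violations

plan = {
    "resource_changes": [
        {"address": "s3_bucket.logs", "type": "s3_bucket",
         "change": {"after": {"encryption_enabled": True}}},
        {"address": "s3_bucket.raw", "type": "s3_bucket",
         "change": {"after": {"encryption_enabled": False}}},
    ]
}

bad = find_violations(plan)
if bad:
    print(f"PLAN REJECTED: unencrypted buckets: {bad}")
```

In a CI pipeline, a non-empty violation list would exit non-zero and fail the plan stage, which is what removes the reliance on code review to catch the missing setting.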


Anatomy of an AI agent knowledge base

“An internal knowledge base is essential for coordinating multiple AI agents,” says James Urquhart, field CTO and technology evangelist at Kamiwaza AI, maker of a distributed AI orchestration platform. “When agents specialize in different roles, they must share context, memory, and observations to act effectively as a collective.” Designed well, a knowledge base ensures agents have access to up-to-date and comprehensive organizational knowledge. Ultimately, this improves the consistency, accuracy, responsiveness, and governance of agentic responses and actions. ... Most knowledge bases include procedures and policies for agents to follow, such as style guides, coding conventions, and compliance rules. They might also document escalation paths, defining how to respond to user inquiries. ... Lastly, persistent memory helps agents retain context across sessions. Access to past prompts, customer interactions, or support tickets helps continuity and improves decision-making, because it enables agents to recognize patterns. But importantly, most experts agree you should make explicit connections between data, instead of just storing raw data chunks. ... At the core of an agentic knowledge base are two main components: an object store and a vector database for embeddings. Whereas a vector database is essential for semantic search, an object store checks multiple boxes for AI workloads: massive scalability without performance bottlenecks, rich metadata for each object, and immutability for auditability and compliance.
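The vector-database half of that architecture can be illustrated with a toy semantic search. The three-dimensional "embeddings" here are hand-made stand-ins for what a real embedding model would produce, and the document IDs echo the knowledge-base contents mentioned above:

```python
# A toy sketch of semantic search over a vector index: rank stored
# document embeddings by cosine similarity to a query embedding.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Each entry: (doc id, embedding). Vectors are illustrative, not real.
index = [
    ("style-guide",     [0.9, 0.1, 0.0]),
    ("escalation-path", [0.1, 0.9, 0.2]),
    ("past-tickets",    [0.0, 0.2, 0.9]),
]

def semantic_search(query_vec, k=1):
    ranked = sorted(index, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# An agent asking "how do I hand off this incident?" embeds closest to
# the escalation document in this toy space:
print(semantic_search([0.2, 0.95, 0.1]))
```

The object store would sit alongside this index, holding the immutable source documents and their metadata; the vector index only answers "which objects are relevant," not "what do they say."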


Trust, Governance, and AI Decision Making

Issues like bias, privacy, and explainability aren’t just technical problems requiring technical solutions. They have to be understood by everyone in the business. That said, the ideal governance structure depends on each company’s business model. ... The word ethics can feel very far from a developer’s everyday world. It can feel like a philosophical thing, whereas they need to write code and build solutions. Also, many of these issues weren’t part of their academic training, so we have to help them understand. ... Kahneman’s idea is that humans use two different cognitive modes when we make decisions. For everyday decisions and small, familiar problems—like riding a bicycle—we use what he called System One, or Thinking Fast, which is automatic and almost unconscious. In System Two, or Thinking Slow, we have this other way of making decisions that requires a lot of time and attention, either because we are confronted with a problem that’s not familiar to us or because we don’t want to make a mistake. ... We compare Thinking Fast to the data-driven machine learning approach—just give me a lot of data, and I will give you the solution without showing you how I got there or even being able to explain it. Thinking Slow, on the other hand, corresponds to a more traditional, rule-based approach to solving problems. ... It’s similar to what we see with agentic AI systems—the focus is not on any one solver, agent, or tool but rather in the governance of the whole system. 


The Global Race for Digital Trust: Where Does India Stand?

In the modern hyperconnected world, trust has replaced convenience as the true currency of digital engagement. Every transaction, whether on a banking app or an e-governance portal, is based on an unspoken belief: systems are secure and intentions are transparent. Nevertheless, this belief remains under constant pressure. ... India’s digital trust framework was significantly reinforced with the inauguration of the National Centre for Digital Trust (NCDT) in July 2025. Established by the Ministry of Electronics and Information Technology (MeitY), this Centre serves as the national hub for digital assurance. It unites key elements, including public key infrastructure, authentication and post-quantum cryptography, under a unified mission. This, in turn, signals the country’s commitment to treating trust as a public good. ... For firms and government agencies alike, compliance signals maturity. It reassures citizens that the systems they rely on, from hospital monitoring networks to smart city command centres, are governed by clear, ethical and verifiable standards. It also reassures global partners that India’s digital infrastructure can operate efficiently across jurisdictions. In the long run, this “compliance premium” could well define which countries earn the confidence to lead the global digital economy. ... The world will measure digital strength not by how fast technology advances, but by how deeply trust is embedded within it.


The privacy paradox is turning into a data centre weak point

While consumers’ failure to adopt basic cyber hygiene might seem like a personal problem, it has wide-reaching implications for infrastructure providers. As cloud services, hosted applications and mobile endpoints interact with backend systems, poor user behaviour becomes an attack vector. Insecure credentials, password reuse and unsecured mobile devices all provide potential entry points, especially in hybrid or multi-tenant environments. ... Putting data centres on an equal footing with water, energy and emergency services systems will mean the data centre sector can now expect greater Government support in anticipating and recording critical incidents. This designation reflects their strategic importance but also brings greater regulatory scrutiny. It also comes against the backdrop of the UK Government’s Cyber Security Breaches Survey in 2024, which reported that 50% of businesses experienced some form of cyber breach in the past 12 months, with phishing accounting for 84% of incidents. This underscores how easily compromised direct or indirect endpoints can threaten core infrastructure. ... The privacy paradox may begin at the consumer level, but its consequences are absorbed by the entire digital ecosystem. Recognising this is the first step. Acting on it through better design, stronger defaults, and user-focused education allows data centre operators to safeguard not just their infrastructure, but the trust that underpins it.

Daily Tech Digest - June 20, 2024

Measure Success: Key Cybersecurity Resilience Metrics

“Cyber resilience is a newer concept. It can get thrown around when one really means cybersecurity, and also in cases where no one really cares about the difference between the two,” says Mike Macado, CISO at BeyondTrust, an identity and access security company. “And to be fair, there can be some blurring between the two. ... “Once the resilience objectives are clear, KPIs can be set to measure them. While there are many possible abstract KPIs, it is crucial to set meaningful and measurable KPIs that can indicate your cyber resilience level and not only tick the box,” says Kellerman. And what are the meaningful, core KPIs? “These include mean time to detect, mean time to respond, recovery time objective, recovery point objective, percentage of critical systems with exposures, employee awareness and phishing click-rates, and an overall assessment of leadership. These KPIs will properly assess your security controls and whether they are protecting your critical path assets, helping to ensure they’re capable of preventing threats,” Kellerman adds. ... “The ability to recover from a cybersecurity attack within a reasonable time that guarantees business continuity is a crucial indicator of resilience...” says Joseph Nwankpa.
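Two of the KPIs Kellerman names, mean time to detect (MTTD) and mean time to respond (MTTR), reduce to simple averages over incident timelines. The incident records below are invented for illustration:

```python
# A minimal sketch of MTTD/MTTR computed from hypothetical incident
# timestamps: (occurred, detected, resolved) for each incident.
from datetime import datetime

incidents = [
    ("2024-06-01 02:00", "2024-06-01 02:30", "2024-06-01 06:30"),
    ("2024-06-10 14:00", "2024-06-10 14:10", "2024-06-10 15:40"),
]

def hours_between(a: str, b: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(b, fmt) - datetime.strptime(a, fmt)
    return delta.total_seconds() / 3600

# MTTD: average time from occurrence to detection.
mttd = sum(hours_between(o, d) for o, d, _ in incidents) / len(incidents)
# MTTR: average time from detection to resolution.
mttr = sum(hours_between(d, r) for _, d, r in incidents) / len(incidents)

print(f"MTTD: {mttd:.2f} h, MTTR: {mttr:.2f} h")
```

Tracked per quarter, the trend in these two averages is what indicates whether resilience is improving, rather than any single incident's numbers.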


Most cybersecurity pros took time off due to mental health issues

“Cybersecurity professionals are at the forefront of a battle they know they are going to lose at some point, it is just a matter of time. It’s a challenging industry and businesses need to recognize that without motivation, cybersecurity professionals won’t be at the top of their game. We’ve worked with both cybersecurity and business leaders to understand the challenges the industry faces. What we’ve discovered shows just how difficult the job is and that there is a significant gap of understanding between the board and the professionals,” said Haris Pylarinos, CEO at Hack The Box. “We’re calling for business leaders to work more closely with cybersecurity professionals to make mental well-being a priority and actually provide the solutions they need to succeed. It’s not just the right thing to do, it makes business sense,” concluded Pylarinos. “Stress, burnout and mental health in cybersecurity is at an all-time high. It’s also not just the junior members of the team, but right up to the CISO level too,” said Sarb Sembhi, CTO at Virtually Informed.


Forget Deepfakes: Social Listening Might be the Most Consequential Use of Generative AI in Politics

Ultimately, the most vulnerable individuals likely to be affected by these trends are not voters; they are children. AI chatbots are already being piloted in classrooms. “Children are once again serving as beta testers for a new generation of digital tech, just as they did in the early days of social media,” writes Caroline Mimbs Nyce for The Atlantic. The risks from generative AI outputs are well documented, from hallucinatory responses to search inquiries to synthetic nonconsensual sexual imagery. Given the rapid normalization of surveillance in education technology, more attention should probably be paid to the inputs such systems collect from kids. ... Not every AI problem requires a policy solution specific to AI: a federal data privacy law that applied to campaigns and political action committees would go a long way toward regulating generative AI-enabled social listening, and could have been put in place long before that technology became widely accessible. The fake Biden robocalls in New Hampshire similarly commend low-tech responses to high-tech problems: the political consultant behind them is charged not with breaking any law against AI fakery but with violating laws against voter suppression.


Resilience in leadership: Navigating challenges and inspiring success

Research shows that cultivating resilience is a long and arduous journey that requires self-awareness, emotional intelligence, and a relentless commitment to personal growth. A great example of this quality and a leader I admire greatly is Jensen Huang, President of Nvidia, which is now one of the most valuable companies in the world with a market cap of more than $2 trillion. As Huang describes quite candidly in many interviews, his early years and the hardships he endured helped him build resilience, where he learnt to brush things off and move on no matter how difficult the situation was. While addressing the grad students at Stanford Graduate School of Business, Huang said, “I wish upon you ample doses of pain and suffering,” as he believes great character is only formed out of people who have suffered. These experiences have not only helped Huang develop a robust management style but have also helped him approach any problem with the mindset of “How hard can it be?” While Huang’s life exemplifies the importance of hardships and suffering, resilience isn't limited to overcoming hardships; it's also about innovation and adaptability in leadership. 


IDP vs. Self-Service Portal: A Platform Engineering Showdown

It’s easy to get lost in the sea of IT acronyms at the best of times, and the platform engineering ecosystem is no different, particularly given that these two options seem to promise similar things but deliver quite differently. Many organizations assume an IDP is what they need to save their developers from repetitive work, when what they are actually looking for is a self-service portal (SSP) to streamline automation. ... By providing a user-friendly interface to define and deploy cloud resources, an SSP frees up the time and effort required to set up complex infrastructure configurations. Centralizing resources provides oversight while also enabling guardrails to be established to protect against “shadow IT” being deployed. This not only helps identify resources that aren’t being used to save money but also helps make cloud practices more eco-friendly by removing unnecessary resources. This is the main difference between an SSP and an IDP, and understanding which capabilities an organization needs is critical for ensuring a smooth platform engineering journey. Like a Russian doll, an IDP is a layer on top of an SSP that offers tools to streamline the entire software development lifecycle.


Chinese Hackers Used Open-Source Rootkits for Espionage

Attackers exploited an unauthenticated remote command execution zero-day on VMware vCenter tracked as CVE-2023-34048. If the threat group failed to gain initial access on the VMware servers, the attackers targeted similar flaws in FortiOS, a flaw in VMware vCenter called postgresDB, or a VMware Tools flaw. After compromising the edge devices, the group's pattern has been to deploy the open-source Linux rootkit Reptile to target virtual machines hosted on the appliance. It uses four rootkit components to capture secure shell credentials: Reptile.CMD to hide files, processes and network connections; Reptile.Shell to listen for specialized packets; a kernel-level file that modifies the .CMD file to achieve rootkit functionality; and a loadable kernel file for decrypting the actual module and loading it into memory. "Reptile appeared to be the rootkit of choice by UNC3886 as it was observed being deployed immediately after gaining access to compromised endpoints," Mandiant said. "Reptile offers both the common backdoor functionality, as well as stealth functionality that enables the threat actor to evasively access and control the infected endpoints via port knocking."


What are the benefits of open access networks?

Toomey says there are various benefits to open access networks, with a key benefit being the fostering of competition. “This competition drives innovation as providers strive to offer the best services and technologies to attract and retain customers,” she said. “Additionally, open access networks can reduce costs for service providers by sharing infrastructure, leading to more affordable services for end-users. “These networks also promote greater network efficiency and resource utilisation, benefiting the entire telecom ecosystem.” But there are challenges with building an open access network, as Toomey said there are high costs in building and maintaining the necessary infrastructure. Enet invested €50m in 2022 to expand its fibre network, but saw its profits fall 47pc to €3.7m in the same year. “Additionally, there is a risk of overbuild, where multiple networks are constructed in the same area, leading to inefficient resource use,” Toomey said. “Another challenge is the centralised thinking on network roll-out in cities, which can neglect rural and underserved areas, creating a digital divide. “Addressing these challenges requires strategic planning and investment, as well as collaboration with government and industry stakeholders to ensure balanced network development.”


CIOs take note: Platform engineering teams are the future core of IT orgs

The core roles in a platform engineering team range from infrastructure engineers, software developers, and DevOps tool engineers, to database administrators, quality assurance, API and security engineers, and product architects. In some cases teams may also include site reliability engineers, scrum masters, UI/UX designers, and analysts who assess performance data to identify bottlenecks. And according to Joe Atkinson, chief products and technology officer at PwC, these teams offer a long list of benefits to IT organizations, including building and maintaining scalable, flexible infrastructure and tools that enable efficient operations; developing standardized frameworks, libraries, and tools to enable rapid software development; cutting costs by consolidating infrastructure resources; and ensuring security and compliance at the infrastructure level. ... You can’t have a successful platform engineering team without building the right culture, says Jamie Holcombe, USPTO CIO. “If you don’t inspire the right behavior then you’ll get people who point at each other when something goes wrong.” And don’t withhold information, he adds. 


What is the current state of Security Culture in Europe?

Organizations prioritizing the establishment and upkeep of a security culture will encourage notably heightened security awareness behaviors among their employees. Examining this further, research has shown that organizations in Europe have a good understanding of security culture as both a process and a strategic measure. However, many have yet to take their first tactical steps toward achieving that goal. Those who have done so realize that shaping security behaviors is essential in developing a security culture. ... Delving deeper, smaller European organisations score higher in security culture due to more effective personal communication, stronger community bonds and better support for security issues. This naturally leads to enhanced Cognition and Compliance, with improvements in communication channels posited as a key driver for better security policy understanding and proactive security behaviours that outperform global averages. An examination of which industries display the best security culture within Europe shows it is gaining traction among security experts in sectors like finance, banking and IT, which are all heavily digitized.


Data Integrity: What It Is and Why It Matters

While data integrity focuses on the overall reliability of data in an organization, Data Quality considers both the integrity of the data and how reliable and applicable it is for its intended use. Preserving the integrity of data emphasizes keeping it intact, fully functional, and free of corruption for as long as it is needed. This is done primarily by managing how the data is entered, transmitted, and stored. By contrast, Data Quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and Data Quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. 
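The distinction can be made concrete in a few lines of code. In this hypothetical Python sketch, a checksum stands in for an integrity control (is the record intact and uncorrupted?), while separate validation rules probe quality dimensions such as uniqueness and timeliness:

```python
import hashlib
from datetime import date

records = [
    {"id": 1, "email": "a@example.com", "updated": date(2024, 4, 1)},
    {"id": 2, "email": "b@example.com", "updated": date(2023, 1, 15)},
    {"id": 3, "email": "a@example.com", "updated": date(2024, 4, 2)},
]

# Integrity: a checksum taken when the record was stored lets us
# detect corruption during transmission or storage later on.
def checksum(rec):
    return hashlib.sha256(repr(sorted(rec.items())).encode()).hexdigest()

stored = {r["id"]: checksum(r) for r in records}
intact = all(checksum(r) == stored[r["id"]] for r in records)

# Quality: validation rules judge fitness for use, e.g. uniqueness
# of emails and timeliness of the last update.
emails = [r["email"] for r in records]
duplicates = sorted({e for e in emails if emails.count(e) > 1})
stale = [r["id"] for r in records
         if (date(2024, 5, 1) - r["updated"]).days > 365]

print(intact, duplicates, stale)  # True ['a@example.com'] [2]
```

A record can pass the integrity check (bit-for-bit intact) and still fail the quality rules, which is exactly the overlap-but-not-identity the article describes.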



Quote for the day:

“Failures are finger posts on the road to achievement.” -- C.S. Lewis

Daily Tech Digest - April 19, 2024

Cloud cost management is not working

The Forrester report illuminates significant visibility challenges when using existing CCMO tools. Tracking expenses across different cloud activities, such as data management, egress charges, and application integration, remains a challenge. Finops is normally on the radar, but these enterprises have yet to adopt useful finops practices, with most programs either nonexistent or not yet off the ground, even if funded. Then there’s the fact that enterprises are not good at using these tools yet, and they seem to add more cost with little benefit. The assumption is that they will get better and costs will get under control. However, given the additional resource needs for AI deployments, improvements are not likely to occur for years. At the same time, there is no plan to provide IT with additional funding, and many companies are attempting to hold the line on spending. Despite these challenges, getting cloud spending under control continues to be a priority, even if results do not show that. This means major fixing needs to be done at the architecture and integration level, which most in IT view as overly complex and too expensive to fix. 


Why Selecting the Appropriate Data Governance Operating Model Is Crucial

When deciding on the data governance operating model, you cannot simply pick one approach without evaluating the benefits each one offers. You need to weigh the potential benefits of centralized and decentralized governance models before making a decision. If you find that the benefits of centralizing your governance operations exceed those of a decentralized model by at least 20%, then it’s best to centralize. With a centralized governance model, you can bridge the skills gap, enjoy consistent outcomes across all business units, easily report on operations, ensure executive buy-in at the C-level, and plan for effectiveness in continuous feedback elicitation, improvements, and change management. However, the downside is that it often leads to operation rigidity, which reduces motivation among mid-level managers, and bureaucracy often outweighs the benefits. It’s important to consider socio-cultural aspects when formulating your operating model, as they can significantly influence the success of your organization.


5 Steps Toward Military-Grade API Security

When evaluating client security, you must address environment-specific threats. In the browser, military grade starts with ensuring the best protections against token theft, where malicious JavaScript threats, also known as cross-site scripting (XSS), are the biggest concern. To reduce the impact of an XSS exploit, it is recommended to use the latest and most secure HTTP-only SameSite cookies to transport OAuth tokens to your APIs. Use a backend-for-frontend (BFF) component to issue cookies to JavaScript apps. The BFF should also use a client credential when getting access tokens. ... A utility API then does the cookie issuing on behalf of its SPA without adversely affecting your web architecture. In an OAuth architecture, clients obtain access tokens by running an OAuth flow. To authenticate users, a client uses the OpenID Connect standard and runs a code flow. The client sends request parameters to the authorization server and receives response parameters. However, these parameters can potentially be tampered with. For example, an attacker might replay a request and change the scope value in an attempt to escalate privileges.
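The cookie-issuing step of such a BFF can be sketched with Python's standard library. The cookie name, path, and token value here are placeholder assumptions, and a production BFF would typically encrypt the token before setting it:

```python
from http.cookies import SimpleCookie

def issue_token_cookie(access_token: str) -> str:
    """Build the Set-Cookie header a BFF might return to its SPA,
    keeping the OAuth access token out of reach of page JavaScript."""
    cookie = SimpleCookie()
    cookie["at"] = access_token          # would be encrypted in practice
    cookie["at"]["httponly"] = True      # invisible to document.cookie (XSS)
    cookie["at"]["secure"] = True        # sent over HTTPS only
    cookie["at"]["samesite"] = "Strict"  # withheld on cross-site requests
    cookie["at"]["path"] = "/api"        # scoped to the API routes
    return cookie.output()

header = issue_token_cookie("eyJhbGciOi...")
print(header)
# e.g. Set-Cookie: at=eyJhbGciOi...; Path=/api; Secure; HttpOnly; SameSite=Strict
```

The HttpOnly flag is what blunts the XSS token-theft risk described above: even a script injected into the page cannot read the cookie, so the token never transits JavaScript.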


Break Security Burnout: Combining Leadership With Neuroscience

The problem for cybersecurity pros is that they often get stuck in a constant state of psychological fight-or-flight response due to the constant stress cycle of their jobs, Coroneos explains. iRest is a training that helps them switch out of this cycle and reach a deeper state of relaxation to reset that fight-or-flight response. This helps the brain switch off so that it is not constantly generating stress, not only in the workplace but throughout everyday life, and thereby creating burnout, he says. "We need to get them into a position where they can come into a proper relationship with their subconscious," Coroneos says, adding that so far cybersecurity professionals who have experienced the training — which Cybermindz is currently piloting — report they are sleeping better and making clearer decisions after only a few sessions of the program. Indeed, while burnout remains a serious problem, the message Coroneos and Williams ultimately want to convey is one of hope: there are solutions to the burnout problem currently facing cybersecurity professionals, and the enormous pressures these dedicated professionals face are not being overlooked.


Unlocking Customer Experience: The Critical Role of Your Supply Chain

It is crucial to find a partner that understands that digital transformation alone is not enough. Unlike point solution vendors who solve isolated problems, prioritize a partner that focuses on three main areas: people, processes, and systems. A good partner will begin its approach by understanding what is actually happening with mission-critical processes in the supply chain like inbound and outbound logistics, supplier management, customer service, help desk, and financial processes. Understanding these root causes helps identify opportunities for improvement and automation. Analyzing data and feedback reveals pain points, bottlenecks, and inefficiencies within each process. Utilizing process mapping and performance metrics helps pinpoint areas ripe for enhancement. Automation technologies, like AI and machine learning, streamline repetitive tasks, reducing errors and enhancing efficiency. By continuously assessing and optimizing these processes, businesses can improve responsiveness, reduce costs, and enhance overall supply chain performance, ultimately driving customer satisfaction and competitive advantage.


AI migration woes: Brain drain threatens India’s tech future

To address the challenge of talent migration, the biggest companies in India must work together to democratise access to resources and opportunities within its tech ecosystem. One key aspect of this approach involves fostering a culture of open collaboration among key stakeholders, including top-tier venture capitalists (VCs), corporates, academia and leading startups, because no single entity can drive AI innovation in isolation. Creating a collaborative ecosystem where information is freely shared and resources are pooled can level the playing field and provide equal opportunities for aspiring AI professionals across the nation. This could involve the establishment of platforms dedicated to knowledge exchange, networking events and cross-sector partnerships aimed at accelerating innovation. ... In addition to these fundamental elements, the tech ecosystem in India must also prioritise accessibility and affordability in the adoption of AI-integrated technologies. The future-ready benefits of AI should be democratised, reaching not only large brands but also small and medium-sized enterprises (SMEs), startups and grassroots organisations. 


Are you a toxic cybersecurity boss? How to be a better CISO

Though most CISOs treat their employees fairly, CISOs are human beings — with all the frailties, quirks, and imperfections of the human condition. But CISOs behaving badly expose their own organizations to huge risks. ... One of the thorniest challenges of a toxic CISO is that the person causing the problem is also the one in charge, making them susceptible to blind spots about their own behavior. Nicole L. Turner, a specialist in workplace culture and leadership coaching, got a close-up look at this type of myopia when a top exec (in a non-security role) recently hired her to deliver leadership training to the department heads at his company. “He felt like they needed training because he could tell some things were going on with them, that they were burned out and overwhelmed. But as I’m training them, I notice these sidebar conversations [among his staff] that he was the problem, more so than the work itself. It was just such an ironic thing and he didn’t know,” recounts Turner, owner and chief culture officer at Nicole L. Turner Consulting in Washington, D.C. There’s also some truth to the adage that it’s lonely at the top, especially in a hypercompetitive corporate environment.


Who owns customer identity?

Onboarding users securely but still seamlessly is a constant conflict in many types of businesses, from retail and insurance to fintech. ... If you are from a regulated industry, MFA becomes important. Make it a risk-based MFA, however, to reduce undue friction. If your business offers a D2C or B2C product or service, seamless onboarding is your number one priority. If user friction is the primary reason for your CIAM initiative, the product team or engineering team should take the lead and bring other teams along. If MFA is the main use case, the CISO should lead the discussions and then bring other teams along. ... If testing or piloting is possible, do so. Experimentation is very valuable in a CIAM context. Whether you are moving to a new CIAM solution, trying a new auth method, or changing your onboarding process in any other way, run a pilot or an A/B test first. Starting small, measuring the results, and taking longer-term decisions accordingly is a healthy cycle to follow when it comes to customer identity processes.
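The "pilot first, measure, then decide" cycle need not involve heavy tooling. A minimal sketch of deterministic bucketing (the experiment name and pilot percentage are invented for illustration) keeps each user's group assignment stable without storing any state:

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, pilot_pct: int = 10) -> str:
    """Deterministically place a user in the pilot or control group.
    Hashing keeps the assignment stable across sessions, no storage needed."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "pilot" if int(digest, 16) % 100 < pilot_pct else "control"

# The same user always lands in the same group for a given experiment,
# so the pilot cohort's onboarding metrics can be compared against control.
group = assign_bucket("user-42", "passkey-onboarding")
assert group == assign_bucket("user-42", "passkey-onboarding")
print(group)
```

Because assignment keys on the experiment name, a user can sit in the pilot for one auth-method trial and in control for another, which keeps experiments independent.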


GenAI: A New Headache for SaaS Security Teams

The GenAI revolution, whose risks remain in the realm of the unknown unknown, comes at a time when the focus on perimeter protection is becoming increasingly outdated. Threat actors today are increasingly focused on the weakest links within organizations, such as human identities, non-human identities, and misconfigurations in SaaS applications. Nation-state threat actors have recently used tactics such as brute-force password sprays and phishing to successfully deliver malware and ransomware, as well as carry out other malicious attacks on SaaS applications. Complicating efforts to secure SaaS applications, the lines between work and personal life are now blurred when it comes to the use of devices in the hybrid work model. With the temptations that come with the power of GenAI, it will become impossible to stop employees from using the technology, whether sanctioned or not. The rapid uptake of GenAI in the workforce should, therefore, be a wake-up call for organizations to reevaluate whether they have the security tools to handle the next generation of SaaS security threats.


What CIOs Can Learn from an Attempted Deepfake Call

Defending against deepfake threats takes a multifaceted strategy. “There's a three-pronged approach where there's education, there's culture, and there's technology,” says Kosak. NINJIO focuses on educating people on cybersecurity risks, like deepfakes, with short, engaging videos. “If you can deepfake a voice and a face or an image based on just a little bit of information or maybe three to four seconds of that voice tone, that's sending us down a path that is going to require a ton of verification and discipline from the individual’s perspective,” says McAlmont. He argues that an hour or two of annual training is insufficient as threats continue to escalate. More frequent training can help increase employee vigilance and build that culture of talking about cybersecurity concerns. When it comes to training around deepfakes, awareness is key. These threats will continue to come. What does a deepfake sound or look like? (Pretty convincing in many cases.) What are some of the common signs that the person you hear or see isn’t who they say they are?



Quote for the day:

“A real entrepreneur is somebody who has no safety net underneath them.” -- Henry Kravis

Daily Tech Digest - April 07, 2024

AI advancements are fueling cloud infrastructure spending

The IDC report offers insights into the evolving landscape of cloud deployment infrastructure spending, explicitly focusing on AI. I’m not sure that anyone will push back on that. However, there are some other market dynamics that we should be paying attention to, namely: Tech leaders’ rapid deployment of AI capabilities is changing infrastructure requirements, emphasizing the need for specialized, high-performance hardware. However, this will likely translate quickly into storage and databases, which are more critical to AI than processing. Who would have thunk? The shift towards GPU-heavy servers at higher price points but fewer units sold reflects the evolving market dynamics influenced by the priorities of cloud providers and enterprise tech behemoths. As I pointed out, this could be a false objective that leads many, including the cloud providers, down the wrong path. ... The significant uptick in cloud infrastructure spending underscores a robust investment in AI-related capabilities, which has far-reaching implications for technology and business landscapes.


How to develop your skillset for the AI era

Grounded in a rich understanding of the broader context and enhanced by a diverse skill set, building specialization will ensure that engineers can bring unique insights, creativity, and solutions that AI cannot. It's the intersection of depth and breadth in an engineer's expertise that will define their irreplaceability in an AI-driven world. This is where Roger Martin's Doctrine of Relentless Utility comes into play, a career strategy that focuses on finding your niche and monopolizing it. As you become more adept at navigating between different roles and perspectives, you'll be better positioned to uncover unique opportunities where your particular blend of skills and interests intersect with unmet needs within your team or organization. Aligning what you're good at with areas where you can make a significant impact allows you to establish a distinctive role that plays to your strengths and passions. This strategy promotes an active, value-driven approach, looking for ways to contribute beyond the usual scope of your role. Your niche could be bridging the gap between advanced technical knowledge and non-technical stakeholders or clients.


A phish by any other name should still not be clicked

The proper way for enterprises to reach out on these matters is something like, “There is a new billing matter that requires your attention. Please log into your portal and look into it.” Why don’t most enterprises do that? Some blame a lack of training — and there is absolutely a lot of truth in that. But it’s often quite deliberate. More responsible enterprises have tried doing this the proper way, but too many customers complained along the lines of, “Do you know how many portals I have to deal with? Give me a link to the portal you want me to use.” ... This gets us right back to the security-vs.-convenience nightmare. This problem is complicated because the situation is two-step. It’s not that the customer will be hurt if they click on your link. It’s that you’re inadvertently making them comfortable with clicking on an unknown link and they might get hurt two days from now when they encounter an actual phishing attack email. Will the enterprise be held liable, especially if you can’t prove the victim clicked because of what was sent? It gets even worse. The old advice used to be to mouseover suspicious links and make sure they’re legitimate. Today, that advice doesn’t work. 


How to keep humans in charge of AI

First, let users choose guardrails through the marketplace. We should encourage a large multiplicity of fine-tuned models. Different users, journalists, religious groups, civil organizations, governments and anyone else who wants to should be able to easily create customized versions of open-source base models that reflect their values and add their own preferred guardrails. Users should then be free to choose their preferred version of the model whenever they use the tool. This would allow companies that produce the base models to avoid, to the extent possible, having to be the “arbiters of truth” for AI. While this marketplace for fine-tuning and guardrails will lower the pressure on companies to some extent, it doesn’t address the problem of central guardrails. Some content — especially when it comes to images or video — will be so objectionable that it can’t be allowed across any fine-tuned models the company offers. ... How can companies impose centralized guardrails on these issues that apply to all the different fine-tuned models without coming right back to the politics problem Gemini has run head-long into? 


Managers tend to target loyal workers for exploitation, study finds

The researchers hypothesized that managers might view loyal employees as more exploitable, targeting them for exploitation. Alternatively, they considered whether managers might protect loyal workers to retain their allegiance. Four studies were conducted with participants ranging from 211 to 510 full-time managers, recruited via Prolific. In the first study, managers were split into three groups, with the first group reading about a loyal employee named John. The survey then described scenarios requiring someone to work overtime or perform uncomfortable tasks without compensation, querying the likelihood of assigning John to these tasks. The second and third groups underwent similar procedures, but with John described as either disloyal or without any characterization. All participants assessed John’s willingness to make personal sacrifices. ... “Given that workers who agree to participate in their own exploitation also acquire stronger reputations for loyalty, the bidirectional causal links between loyalty and exploitation have the potential to create a vicious circle of suffering for certain workers.” The study sheds light on the relationship between workers’ loyalty and behaviors of managers. 


Mastering the CISO role: Navigating the leadership landscape

CISOs must also cultivate stronger partnerships with their C-suite counterparts. IDC’s survey revealed discrepancies in how CISOs and CIOs perceive the CISO’s role, underscoring the need for better alignment. Creed recounted a recent example where the Allegiant Travel board made decisions about connected aircraft without involving the CISO, leading to a last-minute “fire drill” to address cyber security requirements. “Do you think the board, when they first started talking of going down this path of ‘we’re going to expand the fleet’, considered that there might be security implications in that?” he asked. ... To bridge this gap, CISOs must proactively educate executives on the business implications of security risks and advocate for a seat at the strategic decision-making table. As Russ Trainor, Senior Vice President of IT at the Denver Broncos, suggested, “Sometimes I’ll forward news of the breaches over to my CFO: here’s how much data was exfiltrated, here’s how much we think it cost. Those things tend to hit home.” The evolving CISO role demands a delicate balance of technical expertise, business acumen, and communication prowess. 


How companies are prioritising employee health for organisational success

The HR folks have a critical role in implementing wellness initiatives, believes Ritika. “Fostering a supportive work culture, providing resources for physical and mental health, and advocating for policies that prioritise employee well-being to attract and retain talent effectively are the key responsibilities of HR leaders.” According to Ritika, investing in employee health and well-being is not just a commitment but a cornerstone of organisational ethos. The Human Resource (HR) department plays a pivotal role in promoting and protecting the health of employees within an organisation. As per a report, an alarming 43% of Indian tech workers encounter health issues directly linked to their job responsibilities. Additionally, the study indicates that these health issues go beyond physical ailments, with almost 45% of respondents facing mental health challenges like stress, anxiety, and depression. Samra Rehman, Head of People and Culture at Hero Vired, says that HR leaders are responsible for establishing policies and programs that prioritise employee well-being, such as implementing health insurance plans, offering gym memberships or fitness classes, and organising wellness workshops.


Decoding Synchronous and Asynchronous Communication in Cloud-Native Applications

The choice between synchronous and asynchronous communication patterns is not binary but rather a strategic decision based on the specific requirements of the application. Synchronous communication is easy to implement and provides immediate feedback, making it suitable for real-time data access, orchestrating dependent tasks, and maintaining transactional integrity. However, it comes with challenges such as temporal coupling, availability dependency, and network quality impact. On the other hand, asynchronous communication allows a service to initiate a request without waiting for an immediate response, enhancing the system’s responsiveness and scalability. It offers flexibility, making it ideal for scenarios where immediate feedback is not necessary. However, it introduces complexities in resiliency, fault tolerance, distributed tracing, debugging, monitoring, and resource management. In conclusion, designing robust and resilient communication systems for cloud-native applications requires a deep understanding of both synchronous and asynchronous communication patterns. 
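The trade-off can be illustrated in-process with Python's asyncio standing in for calls between services; `fetch_price` and its simulated latency are invented for illustration. The first style waits on each call in turn (temporal coupling), while the second initiates all requests up front:

```python
import asyncio
import time

async def fetch_price(symbol: str) -> float:
    await asyncio.sleep(0.1)  # stand-in for a downstream service call
    return 42.0

async def request_response_style() -> float:
    # Each call blocks the caller until its response arrives:
    # simple and transactional, but latencies add up (temporal coupling).
    start = time.monotonic()
    for symbol in ("AAA", "BBB", "CCC"):
        await fetch_price(symbol)
    return time.monotonic() - start

async def initiate_and_gather_style() -> float:
    # All requests are initiated without waiting on one another:
    # more responsive and scalable, but results arrive out of band.
    start = time.monotonic()
    await asyncio.gather(*(fetch_price(s) for s in ("AAA", "BBB", "CCC")))
    return time.monotonic() - start

sequential = asyncio.run(request_response_style())
concurrent = asyncio.run(initiate_and_gather_style())
print(f"sequential: {sequential:.2f}s, concurrent: {concurrent:.2f}s")
```

The sequential version's latency is the sum of every call it waits on; the concurrent version finishes in roughly the time of the slowest call, mirroring the responsiveness gain asynchronous messaging brings between real services.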


Hackers Use Weaponized PDF Files to Deliver Byakugan Malware on Windows

Due to their high level of trust and popularity, hackers frequently use weaponized PDF files as attack vectors. PDFs can contain malicious code or exploits that abuse flaws in PDF readers; once an unsuspecting user opens such a PDF, the payload runs and infiltrates the system. ... FortiGuard Labs discovered a Portuguese-language PDF file distributing the multi-functional Byakugan malware in January 2024. The malicious PDF tricks victims into clicking a link by presenting a blurred table. The link activates a downloader that drops a copy of itself (require.exe) along with a DLL used for DLL hijacking, and then runs require.exe to retrieve the main module (chrome.exe). Notably, the downloader behaves differently when it is invoked as require.exe from the temp folder, an apparent evasion technique.


Cybercriminal adoption of browser fingerprinting

While browser fingerprinting has been used by legitimate organizations to uniquely identify web browsers for nearly 15 years, it is now also commonly exploited by cybercriminals: a recent study found that one in four phishing sites uses some form of the technique. ... Browser fingerprinting uses a variety of client-side checks to establish browser identities, which can then be used to detect bots or other undesirable web traffic. Numerous pieces of data can be collected as part of fingerprinting, including: time zone; language settings; IP address; cookie settings; screen resolution; browser privacy settings; user-agent string. Browser fingerprinting is used by many legitimate providers to detect bots misusing their services and other suspicious activity, but phishing site authors have also realized its benefits and are using the technique to avoid automated systems that might flag their websites as phishing. By running their own browser fingerprinting checks before loading site content, threat actors are able to conceal phishing content in real time. For example, Fortra has observed threat actors using browser fingerprinting to bypass the Google Ad review process.
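A rough sketch of what such a client-side check looks like (illustrative TypeScript; the `Signals` interface and `fingerprint` function are invented for this sketch, the commented browser APIs are where a real script would read its values, and actual fingerprinting scripts collect far more signals and obfuscate the logic):

```typescript
// Illustrative fingerprint collection and hashing. The fields mirror a few
// of the signals listed above; a real script would gather many more and
// compare the result against known scanner/bot profiles server-side.
interface Signals {
  timezone: string;        // e.g. Intl.DateTimeFormat().resolvedOptions().timeZone
  language: string;        // e.g. navigator.language
  screenSize: string;      // e.g. `${screen.width}x${screen.height}`
  userAgent: string;       // e.g. navigator.userAgent
  cookiesEnabled: boolean; // e.g. navigator.cookieEnabled
}

// FNV-1a hash over the serialized signals: identical browser environments
// yield identical fingerprints.
function fingerprint(signals: Signals): string {
  const s = JSON.stringify(signals);
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

const visitor = fingerprint({
  timezone: "Europe/Lisbon", language: "pt-PT",
  screenSize: "1920x1080", userAgent: "Mozilla/5.0 ...", cookiesEnabled: true,
});
const scanner = fingerprint({
  timezone: "UTC", language: "en-US",
  screenSize: "800x600", userAgent: "HeadlessChrome", cookiesEnabled: false,
});
// A phishing kit can serve benign content when the fingerprint resembles an
// automated scanner, and reveal the phishing page only to likely victims.
console.log(visitor !== scanner);
```

This is also why the defensive and offensive uses are hard to tell apart: the same deterministic hash that lets a provider recognize a returning bot lets a phishing kit recognize a security crawler.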




Quote for the day:

"What you do has far greater impact than what you say." -- Stephen Covey