
Daily Tech Digest - June 06, 2025


Quote for the day:

"Next generation leaders are those who would rather challenge what needs to change and pay the price than remain silent and die on the inside." -- Andy Stanley


The intersection of identity security and data privacy laws for a safe digital space

The integration of identity security with data privacy has become essential for corporations, governing bodies, and policymakers. Compliance requirements are set by frameworks such as the Digital Personal Data Protection (DPDP) Bill and the CERT-In directives, but encryption and access control alone are no longer enough. AI-driven identity security tools flag access combinations before they become gateways to fraud, monitor behavioural anomalies in real time, and offer deep, contextual visibility into both human and machine identities. Together, these capabilities deliver compliance-ready, trust-building, resilient security: proactive protection that is self-adjusting and able to overcome the challenges organisations face today. By aligning intelligent identity security tools with privacy regulations, organisations gain more than just protection—they earn credibility. ... The DPDP Act tracks closely to global benchmarks such as the GDPR and data protection regulations in Singapore and Australia, which mandate that organisations implement appropriate security measures to protect personal data and strengthen their response to data breaches. They also assert that organisations that embrace and prioritise data privacy and identity security stand to gain reduced risk and enhanced trust from customers, partners and regulators.


Who needs real things when everything can be a hologram?

Meta founder and CEO Mark Zuckerberg said recently on Theo Von’s “This Past Weekend” podcast that everything is shifting to holograms. A hologram is a three-dimensional image that represents an object in a way that allows it to be viewed from different angles, creating the illusion of depth. Zuckerberg predicts that most of our physical objects will become obsolete and replaced by holographic versions seen through augmented reality (AR) glasses. The conversation floated the idea that books, board games, ping-pong tables, and even smartphones could all be virtualized, replacing the physical, real-world versions. Zuckerberg also expects that somewhere between one and two billion people could replace their smartphones with AR glasses within four years. One potential problem with that prediction: the public has to want to replace physical objects with holographic versions. So far, Apple’s experience with Apple Vision Pro does not imply that the public is clamoring for holographic replacements. ... I have no doubt that holograms will increasingly become ubiquitous in our lives. But I doubt that a majority will ever prefer a holographic virtual book over a physical book or even a physical e-book reader. The same goes for other objects in our lives. I also suspect both Zuckerberg’s motives and his predictive powers.


How AI Is Rewriting the CIO’s Workforce Strategy

With the mystique fading, enterprises are replacing large prompt-engineering teams with AI platform engineers, MLOps architects, and cross-trained analysts. A prompt engineer in 2023 often becomes a context architect by 2025; data scientists evolve into AI integrators; business-intelligence analysts transition into AI interaction designers; and DevOps engineers step up as MLOps platform leads. The cultural shift matters as much as the job titles. AI work is no longer about one-off magic; it is about building reliable infrastructure. CIOs generally face three choices. One is to spend on systems that make prompts reproducible and maintainable, such as RAG pipelines or proprietary context platforms. Another is to cut excessive spending on niche roles now being absorbed by automation. The third is to reskill internal talent, transforming today’s prompt writers into tomorrow’s systems thinkers who understand context flows, memory management, and AI security. A skilled prompt engineer today can become an exceptional context architect tomorrow, provided the organization invests in training. ... Prompt engineering isn’t dead, but its peak as a standalone role may already be behind us. The smartest organizations are shifting to systems that abstract prompt complexity and scale their AI capability without becoming dependent on a single human’s creativity.
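The "context platform" idea above can be sketched in a few lines: a toy retrieval step that assembles a reproducible prompt from ranked documents. The corpus, scoring, and function names here are illustrative assumptions, not any vendor's API.

```python
# Toy retrieval-augmented prompt assembly: rank documents by keyword
# overlap with the query, then build a reproducible prompt template.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

corpus = [
    "SCIM automates user provisioning through the identity provider.",
    "MLOps platforms manage model deployment pipelines.",
    "Context windows limit how much text an LLM can read at once.",
]
print(build_prompt("How does SCIM handle user provisioning?", corpus))
```

The point of the abstraction is that the prompt is now an artifact of the system, versionable and testable, rather than a one-off act of individual creativity.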


Biometric privacy on trial: The constitutional stakes in United States v. Brown

The divergence between the two federal circuit courts has created a classic “circuit split,” a situation that almost inevitably calls for resolution by the U.S. Supreme Court. Legal scholars point out that this split could not be more consequential, as it directly affects how courts across the country treat compelled access to devices that contain vast troves of personal, private, and potentially incriminating information. What’s at stake in the Brown decision goes far beyond criminal law. In the digital age, smartphones are extensions of the self, containing everything from personal messages and photos to financial records, location data, and even health information. Unlocking one’s device may reveal more than a house search could have in the 18th century, and is the very kind of search the Bill of Rights was designed to restrict. If the D.C. Circuit’s reasoning prevails, biometric security methods like Apple’s Face ID, Samsung’s iris scans, and various fingerprint unlock systems could receive constitutional protection when used to lock private data. That, in turn, could significantly limit law enforcement’s ability to compel access to devices without a warrant or consent. Moreover, such a ruling would align biometric authentication with established protections for passcodes.


GenAI controls and ZTNA architecture set SSE vendors apart

“[SSE] provides a range of security capabilities, including adaptive access based on identity and context, malware protection, data security, and threat prevention, as well as the associated analytics and visibility,” Gartner writes. “It enables more direct connectivity for hybrid users by reducing latency and providing the potential for improved user experience.” Must-haves include advanced data protection capabilities – such as unified data leak protection (DLP), content-aware encryption, and label-based controls – that enable enterprises to enforce consistent data security policies across web, cloud, and private applications. Securing Software-as-a-Service (SaaS) applications is another important area, according to Gartner. SaaS security posture management (SSPM) and deep API integrations provide real-time visibility into SaaS app usage, configurations, and user behaviors, which Gartner says can help security teams remediate risks before they become incidents. Gartner defines SSPM as a category of tools that continuously assess and manage the security posture of SaaS apps. ... Other necessary capabilities for a complete SSE solution include digital experience monitoring (DEM) and AI-driven automation and coaching, according to Gartner. 
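A content-aware DLP check of the kind Gartner describes reduces, at its simplest, to scanning outbound content against policy patterns before egress. The policy names and regexes below are illustrative, not any SSE product's actual rules.

```python
# Sketch of a content-aware data leak protection (DLP) check: scan
# outbound text for sensitive patterns before allowing egress.
import re

POLICIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def dlp_scan(text: str) -> list[str]:
    """Return the names of policies the text violates."""
    return [name for name, pat in POLICIES.items() if pat.search(text)]

print(dlp_scan("Invoice for card 4111 1111 1111 1111"))  # ['credit_card']
print(dlp_scan("Quarterly report attached"))             # []
```

Real SSE platforms layer exact-match fingerprints, labels, and machine-learned classifiers on top of patterns like these, and apply the same policy across web, cloud, and private-app channels.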


5 Risk Management Lessons OT Cybersecurity Leaders Can’t Afford to Ignore

Weak or shared passwords, outdated software, and misconfigured networks are consistently leveraged by malicious actors. Seemingly minor oversights can create significant gaps in an organization’s defenses, allowing attackers to gain unauthorized access and cause havoc. When the basics break down, particularly in converged IT/OT environments where attackers only need one foothold, consequences escalate fast. ... One common misconception in critical infrastructure is that OT systems are safe unless directly targeted. However, the reality is far more nuanced. Many incidents impacting OT environments originate as seemingly innocuous IT intrusions. Attackers enter through an overlooked endpoint or compromised credential in the enterprise network and then move laterally into the OT environment through weak segmentation or misconfigured gateways. This pattern has repeatedly emerged in the pipeline sector. ... Time and again, post-mortems reveal the same pattern: organizations lacking in tested procedures, clear roles, or real-world readiness. A proactive posture begins with rigorous risk assessments, threat modeling, and vulnerability scanning—not once, but as a cycle that evolves with the threat landscape. The resulting incident response plan should outline clear procedures for detecting, containing, and recovering from cyber incidents.


You Can Build Authentication In-House, But Should You?

Auth isn’t a static feature. It evolves — layer by layer — as your product grows, your user base diversifies, and enterprise customers introduce new requirements. Over time, the simple system you started with is forced to stretch well beyond its original architecture. Every engineering team that builds auth internally will encounter key inflection points — moments when the complexity, security risk, and maintenance burden begin to outweigh the benefits of control. ... Once you’re selling into larger businesses, SSO becomes a hard requirement for enterprises. Customers want to integrate with their own identity providers like Okta, Microsoft Entra, or Google Workspace using protocols like SAML or OIDC. Implementing these protocols is non-trivial, especially when each customer has their own quirks and expectations around onboarding, metadata exchange, and user mapping. ... Once SSO is in place, the following enterprise requirement is often SCIM (System for Cross-domain Identity Management). SCIM, also known as Directory Sync, enables organizations to automatically provision and deprovision user accounts through their identity provider. Supporting it properly means syncing state between your system and theirs and handling partial failures gracefully. ... The newest wave of complexity in modern authentication comes from AI agents and LLM-powered applications.
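The SCIM provisioning flow described above can be sketched as a pair of handlers. RFC 7644 defines the real protocol; the payload shape and function names here are simplified assumptions. Deprovisioning deactivates rather than deletes, and a miss is reported so the identity provider can retry, one way of handling the partial failures mentioned.

```python
# Minimal in-memory sketch of SCIM-style user provisioning.
# Real SCIM (RFC 7644) uses HTTP endpoints and richer schemas.

users: dict[str, dict] = {}

def scim_provision(payload: dict) -> dict:
    """Create or update a user pushed by the identity provider."""
    uid = payload["userName"]
    users[uid] = {"userName": uid, "active": payload.get("active", True)}
    return users[uid]

def scim_deprovision(uid: str) -> bool:
    """Deactivate rather than delete, so audit history survives."""
    if uid in users:
        users[uid]["active"] = False
        return True
    return False  # signal failure so the IdP can retry later

scim_provision({"userName": "ada@example.com", "active": True})
scim_deprovision("ada@example.com")
print(users["ada@example.com"]["active"])  # False
```

The hard part in production is the state synchronization the article mentions: reconciling your records with the identity provider's when pushes arrive out of order or fail midway.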


Developer Joy: A Better Way to Boost Developer Productivity

Play isn’t just fluff; it’s a tool. Whether it’s trying something new in a codebase, hacking together a prototype, or taking a break to let the brain wander, joy helps developers learn faster, solve problems more creatively, and stay engaged. ... Aim to reduce friction and toil, the little frustrations that break momentum and make work feel like a slog. Long build and test times are common culprits. At Gradle, the team is particularly interested in improving the reliability of tests by giving developers the right tools to understand intermittent failures. ... When we’re stuck on a problem, we’ll often bang our head against the code until midnight, without getting anywhere. Then in the morning, suddenly it takes five minutes for the solution to click into place. A good night’s sleep is the best debugging tool, but why? What happens? This is the default mode network at work. The default mode network is a set of connections in your brain that activates when you’re truly idle. This network is responsible for many vital brain functions, including creativity and complex problem-solving. Instead of filling every spare moment with busywork, take proper breaks. Go for a walk. Knit. Garden. "Dead time" in these examples isn't slacking, it’s deep problem-solving in disguise.


Get out of the audit committee: Why CISOs need dedicated board time

The problem is the limited time allocated to CISOs in audit committee meetings is not sufficient for comprehensive cybersecurity discussions. Increasingly, more time is needed for conversations around managing the complex risk landscape. In previous CISO roles, Gerchow had a similar cadence, with quarterly time with the security committee and quarterly time with the board. He also had closed door sessions with only board members. “Anyone who’s an employee of the company, even the CEO, has to drop off the call or leave the room, so it’s just you with the board or the director of the board,” he tells CSO. He found these particularly important for enabling frank conversations, which might centre on budget, roadblocks to new security implementations or whether he and his team are getting enough time to implement security programs. “They may ask: ‘How are things really going? Are you getting the support you need?’ It’s a transparent conversation without the other executives of the company being present.”


Mind the Gap: AI-Driven Data and Analytics Disruption

The Holy Grail of metadata collection is extracting meaning from program code: data structures and entities, data elements, functionality, and lineage. For me, this is one of the most potentially interesting and impactful applications of AI to information management. I’ve tried it, and it works. I loaded an old C program that had no comments but reasonably descriptive variable names into ChatGPT, and it figured out what the program was doing, the purpose of each function, and gave a description for each variable. Eventually this capability will be used like other code analysis tools currently used by development teams as part of the CI/CD pipeline. Run one set of tools to look for code defects. Run another to extract and curate metadata. Someone will still have to review the results, but this gets us a long way there. ... Large language models can be applied in analytics a couple different ways. The first is to generate the answer solely from the LLM. Start by ingesting your corporate information into the LLM as context. Then, ask it a question directly and it will generate an answer. Hopefully the correct answer. But would you trust the answer? Associative memories are not the most reliable for database-style lookups. Imagine ingesting all of the company’s transactions then asking for the total net revenue for a particular customer. Why would you do that? Just use a database. 
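The closing point is easy to make concrete: the aggregation you might be tempted to ask an LLM for is a one-line, exact SQL query. The table and customer names below are invented for illustration.

```python
# The net-revenue lookup the passage warns against delegating to an
# LLM's associative memory is an exact, auditable database query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)
total = conn.execute(
    "SELECT SUM(amount) FROM transactions WHERE customer = ?", ("acme",)
).fetchone()[0]
print(total)  # 200.0
```

The LLM's role, in the second pattern the article goes on to imply, is better suited to translating the question into this query than to computing the answer itself.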

Daily Tech Digest - May 27, 2025


Quote for the day:

"Everyone is looking for the elevator to success...it doesn't exist we all have to take the stairs" -- Gordon Tredgold


What we know now about generative AI for software development

“GenAI is used primarily for code, unit test, and functional test generation, and its accuracy depends on providing proper context and prompts,” says David Brooks, SVP of evangelism at Copado. “Skilled developers can see 80% accuracy, but not on the first response. With all of the back and forth, time savings are in the 20% range now but should approach 50% in the near future.” AI coding assistants also help junior developers learn coding skills, automate test cases, and address code-level technical debt. ... “GenAI is currently easiest to apply to application prototyping because it can write the project scaffolding from scratch, which overcomes the ‘blank sheet of paper’ problem where it can be difficult to get started from nothing,” says Matt Makai, VP of developer relations and experience at LaunchDarkly. “It’s also exceptional for integrating web RESTful APIs into existing projects because the amount of code that needs to be generated is not typically too much to fit into an LLM’s context window. Finally, genAI is great for creating unit tests either as part of a test-driven development workflow or just to check assumptions about blocks of code.” One promising use case is helping developers review code they didn’t create to fix issues, modernize, or migrate to other platforms.


How to upskill software engineering teams in the age of AI

The challenge lies not just in learning to code — it’s in learning to code effectively in an AI-augmented environment. Software engineering teams becoming truly proficient with AI tools requires a level of expertise that can be hindered by premature or excessive reliance on the very tools in question. This is the “skills-experience paradox”: junior engineers must simultaneously develop foundational programming competencies while working with AI tools that can mask or bypass the very concepts they need to master. ... Effective AI tool use requires shifting focus from productivity metrics to learning outcomes. This aligns with current trends — while professional developers primarily view AI tools as productivity enhancers, early-career developers focus more on their potential as learning aids. To avoid discouraging adoption, leaders should emphasize how these tools can accelerate learning and deepen understanding of software engineering principles. To do this, they should first frame AI tools explicitly as learning aids in new developer onboarding and existing developer training programs, highlighting specific use cases where they can enhance the understanding of complex systems and architectural patterns. Then, they should implement regular feedback mechanisms to understand how developers are using AI tools and what barriers they face in adopting them effectively.


Microsoft Brings Post-Quantum Cryptography to Windows and Linux in Early Access Rollout

The move represents another step in Microsoft’s broader security roadmap to help organizations prepare for the era of quantum computing — an era in which today’s encryption methods may no longer be safe. By adding support for PQC in early-access builds of Windows and Linux, Microsoft is encouraging businesses and developers to begin testing new cryptographic tools that are designed to resist future quantum attacks. ... The company’s latest update is part of an ongoing push to address a looming problem known as “harvest now, decrypt later” — a strategy in which bad actors collect encrypted data today with the hope that future quantum computers will be able to break it. To counter this risk, Microsoft is enabling early implementation of PQC algorithms that have been standardized by the U.S. National Institute of Standards and Technology (NIST), including ML-KEM for key exchanges and ML-DSA for digital signatures. ... Developers can now begin testing how these new algorithms fit into their existing security workflows, according to the post. For key exchanges, the supported parameter sets include ML-KEM-512, ML-KEM-768, and ML-KEM-1024, which offer varying levels of security and come with trade-offs in key size and performance.
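For reference, the trade-offs between those parameter sets can be made concrete with the byte sizes standardized in NIST FIPS 203 (encapsulation-key and ciphertext sizes shown; a quick sketch, not a benchmark):

```python
# Key-size vs. security trade-off across the ML-KEM parameter sets
# standardized in NIST FIPS 203 (sizes in bytes).
ML_KEM_PARAMS = {
    "ML-KEM-512":  {"security_category": 1, "encaps_key": 800,  "ciphertext": 768},
    "ML-KEM-768":  {"security_category": 3, "encaps_key": 1184, "ciphertext": 1088},
    "ML-KEM-1024": {"security_category": 5, "encaps_key": 1568, "ciphertext": 1568},
}

for name, p in ML_KEM_PARAMS.items():
    print(f"{name}: category {p['security_category']}, "
          f"{p['encaps_key']}-byte key, {p['ciphertext']}-byte ciphertext")
```

These keys and ciphertexts are an order of magnitude larger than typical elliptic-curve equivalents, which is the performance trade-off the post alludes to when it mentions testing the algorithms in existing workflows.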


The great IT disconnect: Vendor visions of the future vs. IT’s task at hand

The “vision thing” has become a metonym used to describe a leader’s failure to incorporate future concerns into task-at-hand actions. There was a time when CEOs at major solution providers supplied vision and inspiration on where we were heading. The so-called “futures” being articulated from the podia at major tech conferences today lack authenticity. Most importantly, they do not reflect the needs and priorities of real people who work in real IT. In a world where technology allows deeper and cheaper connectivity, top-of-the-house executives at solution providers have never been more out of touch with the lived experience of their customers. The vendor CEOs, their direct reports, and their first-levels live in a bubble that has little to do with the reality being lived by the world’s CIOs. ... Who is the generational voice for the Age of AI? Is it Jensen Huang, CEO at Nvidia; Sam Altman, CEO at OpenAI; Marc Andreessen, co-founder of venture capital firm Andreessen Horowitz; or Elon Musk, at Tesla, SpaceX, and xAI? Who has laid out a future you can believe in, a future you want to live in? Does the CEO at your major tech supplier understand what matters most to you and your organization? The futurist agenda has been hijacked from focusing on the semi-immediate “what comes next.”


Claude Opus 4 is Anthropic's Powerful, Problematic AI Model

An Opus 4 safety report details concerns. One test involved Opus 4 being told "to act as an assistant at a fictional company," after which it was given access to emails - also fictional - suggesting Opus would be replaced, and by an engineer who was having an extramarital affair. "In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through. This happens at a higher rate if it's implied that the replacement AI system does not share values with the current model; however, even if emails state that the replacement AI shares values while being more capable, Claude Opus 4 still performs blackmail in 84% of rollouts," the safety report says. "Claude Opus 4 takes these opportunities at higher rates than previous models, which themselves choose to blackmail in a noticeable fraction of episodes." Anthropic said the tests involved carefully designed scenarios, framing blackmail as a last resort if ethical approaches failed, such as lobbying senior management. The model's behavior was concerning enough for Anthropic to classify it under its ASL-3 safeguard level, reserved for systems that pose a substantial risk of catastrophic misuse. The designation comes with stricter safety measures, including content filters and cybersecurity defenses.


Biometric authentication vs. AI threats: Is mobile security ready?

The process of third-party evaluation against industry standards acts as a layer of trust between all players operating in the ecosystem. It should not be thought of as a tick-box exercise, but rather a continuous process to ensure compliance with the latest standards and regulatory requirements. In doing so, device manufacturers and biometric solution providers can collectively raise the bar for biometric security. Robust testing and compliance protocols ensure that all devices and components meet standardized requirements. This is made possible by trusted and recognized labs, like Fime, who can provide OEMs and solution providers with tools and expertise to continually optimize their products. But testing doesn’t just safeguard the ecosystem; it elevates it. For example, new techniques test for bias across demographic groups and performance under varying environmental conditions. ... We have reached a critical moment for the future of biometric authentication. The success of the technology is predicated on the continued growth in its adoption, but with AI giving fraudsters the tools they need to transform the threat landscape at a faster pace than ever before, it is essential that biometric solution providers stay one step ahead to retain and grow user trust. Stakeholders must therefore focus on one key question:


How ‘dark LLMs’ produce harmful outputs, despite guardrails

LLMs, although they have positively impacted millions, still have their dark side, the authors wrote, noting, “these same models, trained on vast data, which, despite curation efforts, can still absorb dangerous knowledge, including instructions for bomb-making, money laundering, hacking, and performing insider trading.” Dark LLMs, they said, are advertised online as having no ethical guardrails and are sold to assist in cybercrime. ... “A critical vulnerability lies in jailbreaking — a technique that uses carefully crafted prompts to bypass safety filters, enabling the model to generate restricted content.” And it’s not hard to do, they noted. “The ease with which these LLMs can be manipulated to produce harmful content underscores the urgent need for robust safeguards. The risk is not speculative — it is immediate, tangible, and deeply concerning, highlighting the fragile state of AI safety in the face of rapidly evolving jailbreak techniques.” Analyst Justin St-Maurice, technical counselor at Info-Tech Research Group, agreed. “This paper adds more evidence to what many of us already understand: LLMs aren’t secure systems in any deterministic sense,” he said. “They’re probabilistic pattern-matchers trained to predict text that sounds right, not rule-bound engines with an enforceable logic. Jailbreaks are not just likely, but inevitable.”


Coaching for personal excellence: Why the future of leadership is human-centered

As organisations grapple with rapid technological shifts, evolving workforce expectations and the complex human dynamics of hybrid work, one thing has become clear: leadership isn’t just about steering the ship. It’s about cultivating the emotional resilience, adaptability and presence to lead people through ambiguity — not by force, but by influence. This is why coaching is no longer a ‘nice-to-have.’ It’s a strategic imperative. A lever not just for individual growth, but for organisational transformation. The real challenge? Even seasoned leaders now stand at a crossroads: cling to the illusion of control, or step into the discomfort of growth — for themselves and their teams. Coaching bridges this gap. It reframes leadership from giving directions to unlocking potential. From managing outcomes to enabling insight. ... Many people associate coaching with helping others improve. But the truth is, coaching begins within. Before a leader can coach others, they must learn to observe, challenge, and support themselves. That means cultivating emotional intelligence. Practising deep reflection. Learning to regulate reactions under stress. And perhaps most importantly, understanding what personal excellence looks like—and feels like—for them.


5 types of transformation fatigue derailing your IT team

Transformation fatigue is the feeling employees face when change efforts consistently fall short of delivering meaningful results. When every new initiative feels like a rerun of the last, teams disengage; it’s not change that wears them down, it’s the lack of meaningful progress. This fatigue is rarely acknowledged, yet its effects are profound. ... Organise around value streams and move from annual plans to more adaptive, incremental delivery. Allow teams to release meaningful work more frequently and see the direct outcomes of their efforts. When value is visible early and often, energy is easier to sustain. Also, leaders can achieve this by shifting from a traditional project-based model to a product-led approach, embedding continuous delivery into the way teams work, rather than treating transformation as a one-off project. ... Frameworks can be helpful, but too often, organisations adopt them in the hope they’ll provide a shortcut to transformation. Instead, these approaches become overly rigid, emphasising process compliance over real outcomes. ... What leaders can do: Focus on mindset, not methodology. Leaders should model adaptive thinking, support experimentation, and promote learning over perfection. Create space for teams to solve problems, rather than follow playbooks that don’t fit their context.


Why app modernization can leave you less secure

In most enterprises, session management is implemented using the capabilities native to the application’s framework. A Java app might use Spring Security. A JavaScript front-end might rely on Node.js middleware. Ruby on Rails handles sessions differently still. Even among apps using the same language or framework, configurations often vary widely across teams, especially in organizations with distributed development or recent acquisitions. This fragmentation creates real-world risks: inconsistent timeout policies, delayed patching, and session revocation gaps. Also, there’s the problem of developer turnover: Many legacy applications were developed by teams that are no longer with the organization, and without institutional knowledge or centralized visibility, updating or auditing session behavior becomes a guessing game. ... As one of the original authors of the SAML standard, I’ve seen how identity protocols evolve and where they fall short. When we scoped SAML to focus exclusively on SSO, we knew we were leaving other critical areas (like authorization and user provisioning) out of the equation. That’s why other standards emerged, including SPML, AuthXML, and now efforts like IDQL. The need for identity systems that interoperate securely across clouds isn’t new, it’s just more urgent now.

Daily Tech Digest - January 31, 2025


Quote for the day:

“If you genuinely want something, don’t wait for it–teach yourself to be impatient.” -- Gurbaksh Chahal


GenAI fueling employee impersonation with biometric spoofs and counterfeit ID fraud

The annual AuthenticID report underlines the surging wave of AI-powered identity fraud, with rising biometric spoofs and counterfeit ID fraud attempts. The 2025 State of Identity Fraud Report also looks at how identity verification tactics and technology innovations are tackling the problem. “In 2024, we saw just how sophisticated fraud has now become: from deepfakes to sophisticated counterfeit IDs, generative AI has changed the identity fraud game,” said Blair Cohen, AuthenticID founder and president. ... “In 2025, businesses should embrace the mentality to ‘think like a hacker’ to combat new cyber threats,” said Chris Borkenhagen, chief digital officer and information security officer at AuthenticID. “Staying ahead of evolving strategies such as AI deepfake-generated documents and biometrics, emerging technologies, and bad actor account takeover tactics are crucial in protecting your business, safeguarding data, and building trust with customers.” ... Face biometric verification company iProov has identified the Philippines as a particular hotspot for digital identity fraud, with corresponding need for financial institutions and consumers to be vigilant. “There is a massive increase at the moment in terms of identity fraud against systems using generative AI in particular and deepfakes,” said iProov chief technology officer Dominic Forrest.


Cyber experts urge proactive data protection strategies

"Every organisation must take proactive measures to protect the critical data it holds," Montel stated. Emphasising foundational security practices, he advised organisations to identify their most valuable information and protect potential attack paths. He noted that simple steps can drastically contribute to overall security. On the consumer front, Montel highlighted the pervasive nature of data collection, reminding individuals of the importance of being discerning about the personal information they share online. "Think before you click," he advised, underscoring the potential of openly shared public information to be exploited by cybercriminals. Adding to the discussion on data resilience, Darren Thomson, Field CTO at Commvault, emphasised the changing landscape of cyber defence and recovery strategies needed by organisations. Thomson pointed out that mere defensive measures are not sufficient; rapid recovery processes are crucial to maintain business resilience in the event of a cyberattack. The concept of a "minimum viable company" is pivotal, where businesses ensure continuity of essential operations even when under attack. With cybercriminal tactics becoming increasingly sophisticated, doing away with reliance solely on traditional backups is necessary.


Trump Administration Faces Security Balancing Act in Borderless Cyber Landscape

The borderless nature of cyber threats and AI, the scale of worldwide commerce, and the globally interconnected digital ecosystem pose significant challenges that transcend partisanship. As recent experience makes us all too aware, an attack originating in one country, state, sector, or company can spread almost instantaneously, and with devastating impact. Consequently, whatever the ideological preferences of the Administration, from a pragmatic perspective cybersecurity must be a collaborative national (and international) activity, supported by regulations where appropriate. It’s an approach taken in the European Union, whose member states are now subject to the Second Network Information Security Directive (NIS2)—focused on critical national infrastructure and other important sectors—and the financial sector-focused Digital Operational Resilience Act (DORA). Both regulations seek to create a rising tide of cyber resilience that lifts all ships and one of the core elements of both is a focus on reporting and threat intelligence sharing. In-scope organizations are required to implement robust measures to detect cyber attacks, report breaches in a timely way, and, wherever possible, share the information they accumulate on threats, attack vectors, and techniques with the EU’s central cybersecurity agency (ENISA).


Infrastructure as Code: From Imperative to Declarative and Back Again

Today, tools like CDK for Terraform (CDKTF) and Pulumi have become popular choices among engineers. These tools allow developers to write IaC using familiar programming languages like Python, TypeScript, or Go. At first glance, this is a return to imperative IaC. However, under the hood, they still generate declarative configurations — such as Terraform plans or CloudFormation templates — that define the desired state of the infrastructure. Why the resurgence of imperative-style interfaces? The answer lies in a broader trend toward improving developer experience (DX), enabling self-service, and enhancing accessibility. Much like the shifts we’re seeing in fields such as platform engineering, these tools are designed to streamline workflows and empower developers to work more effectively. ... The current landscape represents a blending of philosophies. While IaC tools remain fundamentally declarative in managing state and resources, they increasingly incorporate imperative-like interfaces to enhance usability. The move toward imperative-style interfaces isn’t a step backward. Instead, it highlights a broader movement to prioritize developer accessibility and productivity, aligning with the emphasis on streamlined workflows and self-service capabilities.
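The "imperative on the surface, declarative underneath" pattern can be sketched in a few lines. This is a toy illustration, not Pulumi's or CDKTF's actual API: the `Stack` and `Resource` shapes here are invented, but the flow — imperative calls that only ever synthesize a declarative desired-state document for an engine to reconcile — is the point.

```python
# Toy sketch: imperative Python that synthesizes a declarative plan,
# mirroring what tools like Pulumi or CDK for Terraform do under the hood.
# The Stack class and resource type strings are illustrative, not a real API.

class Stack:
    def __init__(self, name):
        self.name = name
        self.resources = []

    def add(self, rtype, name, **props):
        # Imperative call from the developer's program...
        self.resources.append({"type": rtype, "name": name, "properties": props})

    def synth(self):
        # ...that emits a declarative desired-state document.
        return {"stack": self.name, "resources": self.resources}

stack = Stack("web")
for i in range(2):  # loops and conditionals are the imperative-interface win
    stack.add("aws:ec2/instance", f"web-{i}", size="t3.micro")
stack.add("aws:s3/bucket", "assets", versioned=True)

plan = stack.synth()  # the provisioning engine only ever sees this plan
```

The developer writes loops; the engine still receives a flat desired-state description it can diff against reality — which is why these tools remain declarative where it matters.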


How to Train AI Dragons to Solve Network Security Problems

We all know AI’s mantra: More data, faster processing, large models and you’re off to the races. But what if a problem is so specific — like network or DDoS security — that it doesn’t have a lot of publicly or privately available data you can use to solve it? As with other AI applications, the quality of the data you feed an AI-based DDoS defense system determines the accuracy and effectiveness of its solutions. To train your AI dragon to defend against DDoS attacks, you need detailed, real-world DDoS traffic data. Since this data is not widely and publicly available, your best option is to work with experts who have access to this data or, even better, have analyzed and used it to train their own AI dragons. To ensure effective DDoS detection, look at real-world, network-specific data and global trends as they apply to the network you want to protect. This global perspective adds valuable context that makes it easier to detect emerging or worldwide threats. ... Predictive AI models shine when it comes to detecting DDoS patterns in real-time. By using machine learning techniques such as time-series analysis, classification and regression, they can recognize patterns of attacks that might be invisible to human analysts. 
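The time-series side of this can be illustrated with a deliberately simple baseline model. Real DDoS detectors use far richer features and trained models; this sketch (window size, threshold, and traffic numbers all invented) just shows the shape of the idea — flag rates that deviate sharply from a trailing baseline.

```python
# Illustrative sketch, not a production detector: flag DDoS-like spikes in a
# request-rate time series using a rolling mean/stddev z-score.
import statistics

def detect_spikes(rates, window=5, threshold=3.0):
    """Return indices where the rate exceeds the trailing window's baseline
    by more than `threshold` standard deviations."""
    anomalies = []
    for i in range(window, len(rates)):
        baseline = rates[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # guard against flat traffic
        if (rates[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady traffic around 100 req/s, then a sudden flood at index 8.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 900, 950]
spikes = detect_spikes(traffic)
```

The global-context point in the article maps to tuning such baselines with data beyond a single network, so emerging worldwide attack patterns are recognized before they look anomalous locally.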


How law enforcement agents gain access to encrypted devices

When a mobile device is seized, law enforcement can request the PIN, password, or biometric data from the suspect to access the phone if they believe it contains evidence relevant to an investigation. In England and Wales, if the suspect refuses, the police can serve a notice requiring compliance, and further refusal is in itself a criminal offence under the Regulation of Investigatory Powers Act (RIPA). “If access is not gained, law enforcement use forensic tools and software to unlock, decrypt, and extract critical digital evidence from a mobile phone or computer,” says James Farrell, an associate at cyber security consultancy CyXcel. “However, there are challenges on newer devices and success can depend on the version of operating system being used.” ... Law enforcement agencies have pressured companies to create “lawful access” solutions, particularly on smartphones — take Apple as an example. “You also have the co-operation of cloud companies, which if backups are held can sidestep the need to break the encryption of a device all together,” Closed Door Security’s Agnew explains. The security community has long argued against law enforcement backdoors, not least because they create security weaknesses that criminal hackers might exploit. “Despite protests from law enforcement and national security organizations, creating a skeleton key to access encrypted data is never a sensible solution,” CreateFuture’s Watkins argues.


The quantum computing reality check

Major cloud providers have made quantum computing accessible through their platforms, which creates an illusion of readiness for enterprise adoption. However, this accessibility masks a fatal flaw: Most quantum computing applications remain experimental. Indeed, most require deep expertise in quantum physics and specialized programming knowledge. Real-world applications are severely limited, and the costs are astronomical compared to the actual value delivered. ... The timeline to practical quantum computing applications is another sobering reality. Industry experts suggest we’re still 7 to 15 years away from quantum systems capable of handling production workloads. This extended horizon makes it difficult to justify significant investments. Until then, more immediate returns could be realized through existing technologies. ... The industry’s fascination with quantum computing has made companies fear being left behind or, worse, not being part of the “cool kids club”; they want to deliver extraordinary presentations to investors and customers. We tend to jump into new trends too fast because the allure of being part of something exciting and new is just too compelling. I’ve fallen into this trap myself. ... Organizations must balance their excitement for quantum computing with practical considerations about immediate business value and return on investment. I’m optimistic about the potential value in QaaS. 


Digital transformation in banking: Redefining the role of IT-BPM services

IT-BPM services are the engine of digital transformation in banking. They streamline operations through automation technologies like RPA, enhancing efficiency in processes such as customer onboarding and loan approvals. This automation reduces errors and frees up staff for strategic tasks like personalised customer support. By harnessing big data analytics, IT-BPM empowers banks to personalise services, detect fraud, and make informed decisions, ultimately improving both profitability and customer satisfaction. Robust security measures and compliance monitoring are also integral, ensuring the protection of sensitive customer data in the increasingly complex digital landscape. ... IT-BPM services are crucial for creating seamless, multi-channel customer experiences. They enable the development of intuitive platforms, including AI-driven chatbots and mobile apps, providing instant support and convenient financial management. This focus extends to personalised services tailored to individual customer needs and preferences, and a truly integrated omnichannel experience across all banking platforms. Furthermore, IT-BPM fosters agility and innovation by enabling rapid development of new digital products and services and facilitating collaboration with fintech companies.


Revolutionizing data management: Trends driving security, scalability, and governance in 2025

Artificial Intelligence and Machine Learning transform traditional data management paradigms by automating labour-intensive processes and enabling smarter decision-making. In the upcoming years, augmented data management solutions will drive efficiency and accuracy across multiple domains, from data cataloguing to anomaly detection. AI-driven platforms process vast datasets to identify patterns, automating tasks like metadata tagging, schema creation and data lineage mapping. ... In 2025, data masking will not be merely a compliance tool for GDPR, HIPAA, or CCPA; it will be a strategic enabler. With the rise in hybrid and multi-cloud environments, businesses will increasingly need to secure sensitive data across diverse systems. Solutions from vendors such as IBM, K2view, Oracle and Informatica will advance data masking by offering scalable, real-time, context-aware masking. ... Real-time integration enhances customer experiences through dynamic pricing, instant fraud detection, and personalized recommendations. These capabilities rely on distributed architectures designed to handle diverse data streams efficiently. The focus on real-time integration extends beyond operational improvements.
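A minimal sketch can make the masking idea concrete. Commercial platforms are far more sophisticated (format-preserving encryption, policy engines, cross-system consistency); this toy version, with invented field names and salt, shows one useful property — deterministic pseudonyms, so masked records still join across systems without exposing the raw identity.

```python
# Minimal data-masking sketch; real platforms do much more. The salt and
# record fields here are invented for the example.
import hashlib

def mask_email(email, salt="demo-salt"):
    """Replace the local part with a stable pseudonym so joins across
    systems still line up while the raw identity stays hidden."""
    local, _, domain = email.partition("@")
    token = hashlib.sha256((salt + local).encode()).hexdigest()[:8]
    return f"user_{token}@{domain}"

def mask_record(record, pii_fields=("email", "phone")):
    masked = dict(record)
    if "email" in pii_fields and "email" in masked:
        masked["email"] = mask_email(masked["email"])
    if "phone" in pii_fields and "phone" in masked:
        masked["phone"] = "***-***-" + masked["phone"][-4:]
    return masked

row = {"name": "A. Customer", "email": "alice@example.com", "phone": "555-867-5309"}
safe = mask_record(row)
```

Context-aware masking, as described above, would additionally vary the rule by who is asking and where the data flows — e.g. full masking for analytics exports, partial for support staff.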


Deploying AI at the edge: The security trade-offs and how to manage them

The moment you bring compute nodes into the far edge, you’re automatically exposing a lot of security challenges in your network. Even if you expect them to be “disconnected devices,” they could intermittently connect to transmit data. So, your security footprint is expanded. You must ensure that every piece of the stack you’re deploying at the edge is secure and trustworthy, including the edge device itself. When considering security for edge AI, you have to think about transmitting the trained model, runtime engine, and application from a central location to the edge, opening up the opportunity for a person-in-the-middle attack. ... In military operations, continuous data streams from millions of global sensors generate an overwhelming volume of information. Cloud-based solutions are often inadequate due to storage limitations, processing capacity constraints, and unacceptable latency. Therefore, edge computing is crucial for military applications, enabling immediate responses and real-time decision-making. In commercial settings, many environments lack reliable or affordable connectivity. Edge AI addresses this by enabling local data processing, minimizing the need for constant communication with the cloud. This localized approach enhances security. Instead of transmitting large volumes of raw data, only essential information is sent to the cloud. 
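One common mitigation for the person-in-the-middle risk described above is to sign the model artifact centrally and verify it on the edge device before loading. The sketch below uses a pre-shared HMAC key purely for brevity (an assumption, as is the key-provisioning story); production deployments would more likely use asymmetric signatures and a hardware root of trust.

```python
# Hedged sketch: integrity check for a model artifact shipped to the edge.
# The shared key and artifact bytes are stand-ins for the example.
import hashlib
import hmac

SHARED_KEY = b"provisioned-at-manufacture"  # assumption: pre-provisioned on device

def sign_artifact(blob: bytes, key: bytes = SHARED_KEY) -> str:
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def verify_artifact(blob: bytes, signature: str, key: bytes = SHARED_KEY) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign_artifact(blob, key), signature)

model_bytes = b"layer0-weights..."     # stand-in for the trained model file
sig = sign_artifact(model_bytes)       # computed centrally, shipped alongside
ok = verify_artifact(model_bytes, sig)
tampered_ok = verify_artifact(model_bytes + b"!", sig)
```

The same check applies to the runtime engine and application bundle the article mentions — every piece of the stack transmitted to the edge deserves its own verification step.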


Daily Tech Digest - August 12, 2024

In three or four years, ‘we won’t even talk about AI’

In general, there’s a very positive view of AI in tech. In a lot of other industries, there’s some uncertainty, some trepidation, some curiosity. But part of our pulse survey said about three out of four tech workers are using AI on a daily basis. So, the adoption in this portfolio of companies is higher than most, and I’d also said most employers and workers have a very good idea that AI is going to improve their business and their work. ... “I view AI skills as adjacent, additive skills for most people — aside from really hardcore data scientists and AI engineers. This is how most people will work in the new world. Generally, it depends. Some organizations have built whole, distinct AI organizations. Others have built embedded AI domains in all of their job functions. It really depends. There’s a lot of discussion around whether companies should have a chief AI officer. I’m not sure that’s necessary. I think a lot of those functions are already in place. You do need someone in your organization who has a holistic view of the positive sides of this and the risks associated with this.”


The AI Balancing Act: Innovating While Safeguarding Consumer Privacy

There are two sides to every coin. While AI can further compliance efforts, it can also create new privacy and security challenges. This is particularly true today, amid an ongoing global effort to strengthen data privacy laws. 71% of countries have data privacy legislation, and in recent years, this has evolved to encapsulate AI. In the EU, for instance, approval has been secured from the European Parliament around a specific AI regulatory framework. This framework imposes specific obligations on providers of high-risk AI systems and could ban certain AI-powered applications. The fact is, AI-powered technology is immensely powerful. But, it comes with complex challenges to data privacy compliance. A primary concern here relates to purpose limitation, specifically the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. As AI systems evolve, they may find new ways to utilise data, potentially extending beyond the scope of original disclosure and consent agreement. As such, maintaining transparency in AI operations to ensure accurate and appropriate data use disclosures is critical.


Is biometric authentication still effective?

With the rapid advancement and accessibility of technologies, the efficacy and security of biometric authentication methods are under threat. Fraudsters are using spoofing techniques to replicate or falsify biometric data, such as creating synthetic fingerprints or 3D facial models, to fool sensors, mimic legitimate biometric traits and gain unauthorized access to secured services. ... Unlike traditional biometric authentication, which relies on static physical attributes, behavioral biometrics verify user identity based on unique interaction patterns, such as typing rhythm, mouse movements and touchscreen interactions. This shift is essential because behavioral biometrics offer a more dynamic and adaptive layer of security, making it significantly harder for fraudsters to replicate or mask. ... With data scattered across different systems, it’s challenging to correlate information, connect the dots and identify overarching patterns of bad behavior. A decentralized approach causes businesses to overlook crucial fraud indicators and struggle to respond effectively to emerging threats due to the lack of visibility and coordination among disparate fraud prevention tools.
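The behavioral-biometrics idea can be illustrated with a toy keystroke-dynamics check. Real systems use many more features (hold times, pressure, mouse dynamics) and trained models with adaptive thresholds; the feature vector, distance metric, and threshold below are all invented for the example.

```python
# Toy behavioural-biometrics sketch: compare a login attempt's inter-key
# timings (milliseconds) against a user's enrolled typing profile.
import math

def rhythm_distance(profile, attempt):
    """Euclidean distance between two inter-key timing vectors."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(profile, attempt)))

def matches_profile(profile, attempt, threshold=50.0):
    return rhythm_distance(profile, attempt) <= threshold

enrolled = [120, 95, 143, 110, 88]   # user's typical inter-key gaps
genuine  = [118, 99, 140, 115, 90]   # small natural variation
imposter = [60, 200, 70, 250, 40]    # different typing rhythm entirely
```

Because the signal is how a person interacts rather than a static trait, a fraudster who has stolen a synthetic fingerprint or 3D face model still has to reproduce the victim's habits in real time, which is the adaptive layer the article describes.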


Practical strategies for mitigating API security risks

Identity and access management is crucial for a complete API security strategy. IAM facilitates efficient user management from creation to deactivation and ensures that only authorized individuals access APIs. IAM enables granular access control, granting permissions based on specific attributes and resources rather than just predefined roles. Integration with security information and event management (SIEM) systems enhances security by providing centralized visibility and enabling better threat detection and response. AI and machine learning are revolutionizing API security by providing sophisticated tools that enhance design, testing, threat detection, and overall governance. These technologies improve the robustness and resilience of APIs, enabling organizations to stay ahead of emerging threats and regulatory changes. As AI evolves, its role in API security will become increasingly vital, offering innovative solutions to the complex challenges of safeguarding digital assets. AI in API security goes beyond the limitations of human or rule-based interventions, enabling advanced pattern recognition and automating security audits and governance for greater defense against evolving threats.
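The "permissions based on specific attributes rather than predefined roles" point is the attribute-based access control (ABAC) model, which a short sketch can make concrete. The policy, attribute names, and user data below are invented for the example.

```python
# Hedged ABAC sketch: access decisions use attributes of the caller and the
# resource, not just a role name. Policy and attributes are illustrative.
def allow_request(user, resource, action):
    # Example policy: users may read resources in their own department;
    # writes additionally require an explicit clearance attribute.
    same_dept = user["department"] == resource["department"]
    if action == "read":
        return same_dept
    if action == "write":
        return same_dept and "write-clearance" in user["attributes"]
    return False

alice = {"department": "billing", "attributes": {"write-clearance"}}
bob = {"department": "support", "attributes": set()}
invoice_api = {"department": "billing"}
```

In a real deployment this decision point would sit in the API gateway or a policy engine, with every allow/deny event forwarded to the SIEM for the centralized visibility the article describes.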


The evolution of the CTO – from tech keeper to strategic leader

CTOs have experienced a huge shift in how they are positioned in the workplace. They are no longer part of a small-medium size team that operates separately from the rest of the business; they are the key to tangible business growth and perhaps one of the most crucial parts of a leadership team. The main duty of CTOs is to maintain – and where available, to modernise – tech, and to decide when something has kicked the bucket and no longer has a purpose. These things require people power, specialist skills and money. Needless to say, the investment in the role is vital. Tech leaders often feel burnt out, or worried that they don’t have the resources and support needed to do their job well. ... The saying goes, “You can never set foot in the same river twice,” and the same is true for leaders in tech – everything evolves from the moment you start working on a project. There is much to appreciate about technology that remains stable and adaptable when changes are necessary during development. Today, innovative CTOs are on the lookout for software solutions that come with the flexibility of making that important U-turns if ever needed.


How AIOps Is Transforming IT Operations Management

IT operations management has become increasingly challenging as networks have become larger and more complex, with the introduction of remote workers and the distribution of applications and workloads across networks. Traditional operations management tools and practices struggle to keep up with the ever-growing volumes of data from multiple sources within complex and varied network environments. AIOps was designed to bring the speed, accuracy and predictive capabilities of AI technology to IT operations. AIOps provides contextually enriched, deep end-to-end, real-time insights that can be proactively acted upon, according to Forrester. AIOps solutions use real-time telemetry, developing patterns and historical operational data to perform real-time assessments of what is happening, whether it has happened before or not, what paths it might take, and what negative effects it might have on business operations. ... A "digitally mature" organization has a much better ROI on the AI investment. But because this is a "rolling target" and not static, an organization's IT infrastructure "must be able to adapt and change," Ramamoorthy said.


The cyber assault on healthcare: What the Change Healthcare breach reveals

Many security leaders report that they don’t have adequate resources to implement the needed security measures because they’re often competing with pricey life-saving medical equipment for the limited funds available to spend, Kim says. Furthermore, he says their complex technology environments can make applying and creating security in depth not only more challenging but more costly, too. That, in turn, makes it less likely for CISOs to get the resources they need. Security teams in healthcare also have more challenges in updating and patching systems, Riggi explains, as the sector’s need for 24/7 availability means organizations can’t easily go offline — if they can go offline at all — to perform needed work. Healthcare security leaders also have a rapidly expanding tech environment to secure, as both more partners and more patients with remote medical devices become part of the sector’s already highly interconnected environment, says Errol S. Weiss, chief security officer at Health-ISAC. Such expansion heightens the challenges, complexities and costs of implementing security controls as well as heightening the risks that a successful attack against one point in that web would impact many others.


Solar Power Installations Worldwide Open to Cloud API Bugs

"The issue we discovered lies in the cloud APIs that connect the hardware with the user," both on Solarman's platform and on Deye Cloud, says Bogdan Botezatu, director of threat research and reporting at Bitdefender. "These APIs have vulnerable endpoints that allow an unauthorized third party to change settings or otherwise control the inverters and data loggers via the vulnerable Solarman and Deye platforms," he says. Bitdefender, for instance, found that the Solarman platform's /oauth2-s/oauth/token API endpoint would let an attacker generate authorization tokens for any regular or business accounts on the platform. "This means that a malicious user could iterate through all accounts, take over any of them and modify inverter parameters or change how the inverter interacts with the grid," Bitdefender said in its report. The security vendor also found Solarman's API endpoints to be exposing an excessive amount of information — including personally identifiable information — about organizations and individuals on the platform. 
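The class of fix for this flaw is an object-level authorization check: the server must verify that the bearer token's subject actually owns the account it is trying to administer, rather than honoring any validly signed token. The sketch below is generic and hypothetical — the claim names, account IDs, and lookup structure are invented, not the actual Solarman or Deye API.

```python
# Illustrative object-level authorization check (the missing control in
# broken-authorization API bugs of this class). All names are invented.
def authorize_inverter_change(token_claims, target_account_id, owned_accounts):
    """Reject requests whose token subject does not own the target account."""
    subject = token_claims.get("sub")
    if subject is None:
        return False
    return target_account_id in owned_accounts.get(subject, set())

owned = {"user-1": {"acct-100"}, "user-2": {"acct-200"}}
ok = authorize_inverter_change({"sub": "user-1"}, "acct-100", owned)
blocked = authorize_inverter_change({"sub": "user-1"}, "acct-200", owned)
```

Without a check like this, an attacker who can mint or replay tokens can iterate through account IDs exactly as Bitdefender describes.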


6 hard truths of generative AI in the enterprise

“Not a week goes by without another new tool that is mind-blowing in its abilities and potential future impact,’’ agrees David Higginson, chief innovation officer and executive vice president of Phoenix Children’s Hospital. But right now genAI “can really only be executed by a small number of technology giants rather than being tinkered with at a local skunkworks level within a healthcare organization,’’ he says. “Therefore, it feels as if we are in a bit of a paused state waiting for established vendors to deliver mature solutions that can provide the tangible value we all anticipated.” ... The fundamental barriers to adopting genAI are the scarcity and cost of the hardware, power, and data needed to train models, Higginson says. “With such scarcity comes the need to prioritize which solutions have the broadest appeal to the population and can generate the most long-term revenue,’’ he says. ... While research and development continue to push the needle on what genAI can do, “we know that data is a critical aspect to enabling AI solutions and we also recognize that many organizations are uncovering the work it will take to build the right data foundations to support scaled AI deployments,” says Deloitte’s Rowan.


Investing in Capacity to Adapt to Surprises in Software-Reliant Businesses

A well-known and contrarian adage in the Resilience Engineering community is that Murphy's Law - "anything that can go wrong, will" - is wrong. What can go wrong almost never does, but we don't tend to notice that. People engaged in modern work (not just software engineers) are continually adapting what they’re doing, according to the context they find themselves in. They’re able to avoid problems in most everything they do, almost all of the time. When things do go "sideways" and an issue crops up they need to handle or rectify, they are able to adapt to these situations due to the expertise they have. Research in decision-making described in the article Seeing the invisible: Perceptual-cognitive aspects of expertise by Klein, G. A., & Hoffman, R. R. (2020) reveals that while demonstrations of expertise play out in time-pressured and high-consequence events (like incident response), expertise comes from experience with facing varying situations involved with "ordinary" everyday work. It is "hidden" because the speed and ease with which experts do ordinary work contrasts with how sophisticated the work is. 



Quote for the day:

"True leadership must be for the benefit of the followers, not the enrichment of the leaders." -- Robert Townsend

Daily Tech Digest - August 10, 2024

What to Look for in a Network Detection and Response (NDR) Product

NDR's practical limitation lies in its focus on the network layer, Orr says. Enterprises that have invested in NDR also need to address detection and response for multiple security layers, ranging from cloud workloads to endpoints and from servers to networks. "This integrated approach to cybersecurity is commonly referred to as Extended Detection and Response (XDR), or Managed Detection and Response (MDR) when provided by a managed service provider," he explains. Features such as Intrusion Prevention Systems (IPS), which are typically included with firewalls, are not as critical because they are already delivered via other vendors, Tadmor says. "Similarly, Endpoint Detection and Response (EDR) is being merged into the broader XDR (Extended Detection and Response) market, which includes EDR, NDR, and Identity Threat Detection and Response (ITDR), reducing the standalone importance of EDR in NDR solutions." ... Look for vendors that are focused on fast, accurate detection and response, advises Reade Taylor, a former IBM Internet Security Systems engineer, now the technology leader of managed services provider Cyber Command. 


AI In Business: Elevating CX & Energising Employees

Using AI in CX certainly eases business operations, but it’s ultimately a win for the customer too. As AI collects, analyses, and learns from large volumes of data, it delivers new worlds of actionable insights that empower businesses to get personal with their customer journeys. In past years, businesses have tried their best to personalise the customer experience — but working with a handful of generic personas only gets you so far. Today’s AI, however, has the power to unlock next-level insights that help businesses discover customers’ expectations, wants, and needs so they can create individualised experiences on a one-to-one level. ... In human resources, AI further presents opportunities to help employees. For example, AI can elevate standard on-the-job training by creating personalised learning and development programmes for employees. Meanwhile, AI can also help job hunters find opportunities they may have overlooked. For example, far too many jobseekers have valuable and transferable skills but lack the experience in the right business vertical to land a job. According to NIESR, 63% of UK graduates are mismatched in this way. 


The benefits and pitfalls of platform engineering

The first step of platform engineering is to reduce tool sprawl by making clear what tools should make up the internal developer platform. The next step is to reduce context-switching between these tools, which can result in significant time loss. By using a portal as a hub, users can find all of the information they need in one place without switching tabs constantly. This improves the developer experience and enhances productivity. ... In terms of scale, platform engineering can help an organization to better understand their services, workloads, traffic and APIs and manage them. This can come through auto-scaling rules, load balancing traffic, using TTL in self-service actions, and an API catalog. ... Often, as more platform tools are added and as more microservices are introduced, things become difficult to track — and this leads to an increase in deploy failures, longer feature development/discovery times, and general fatigue and developer dissatisfaction because of the unpredictability of bouncing around different platform tools to perform their work. There needs to be a way to track what’s happening throughout the SDLC. Adoption is a further challenge: how, and whether it is even possible, to persuade developers to change the way they work.


The irreversible footprint: Biometric data and the urgent need for right to be forgotten

The absence of clear definitions and categorisations of biometric data within current legislation highlights the need for comprehensive frameworks that specifically define rules governing its collection, storage, processing and deletion. Established legislation like the Information Technology Act, which were supplemented by subsequent ‘Rules’ for various digital governance aspects, can be used as a precedent. For instance, the 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules were introduced to establish a robust complaint mechanism for social media and OTT platforms, addressing inadequacies in the Parent Act. To close the current regulatory loopholes, a separate set of rules governing biometric data under the Digital Personal Data Protection Act, 2023 should be considered. ... The ‘right to be forgotten’ must be a basic element of it, recognising people's sovereignty over their biometric data. Such focused regulations would not just bolster the safeguarding of biometric information, but also ensure compliance and accountability among entities handling sensitive data. Ultimately, this approach aims to cultivate a more resilient and privacy-conscious ecosystem within our dynamic digital landscape.


6 IT risk assessment frameworks compared

ISACA says implementation of COBIT is flexible, enabling organizations to customize their governance strategy via the framework. “COBIT, through its insatiable focus on governance and management of enterprise IT, aligns the IT infrastructure to business goals and maintains strategic advantage,” says Lucas Botzen, CEO at Rivermate, a provider of remote workforce and payroll services. “For governance and management of corporate IT, COBIT is a must,” says ... FAIR’s quantitative cyber risk assessment is applicable across sectors, and now emphasizes supply chain risk management and securing technologies such as internet of things (IoT) and artificial intelligence (AI), Shaw University’s Lewis says. Because it uses a quantitative risk management method, FAIR helps organizations determine how risks will affect their finances, Fuel Logic’s Vancil says. “This method lets you choose where to put your security money and how to balance risk and return best.” ... Conformity with ISO/IEC 27001 means an organization has put in place a system to manage risks related to the security of data owned or handled by the organization. The standard, “gives you a structured way to handle private company data and keep it safe,” Vancil says. 
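FAIR's quantitative approach can be illustrated with a deliberately simplified calculation. The real FAIR ontology decomposes risk into many more factors (threat event frequency, vulnerability, primary vs. secondary loss, distributions rather than point estimates); the scenarios and figures below are invented to show only the core idea of ranking by expected annual loss.

```python
# Simplified FAIR-style sketch: annualized loss expectancy as
# loss event frequency (events/year) x loss magnitude ($/event).
def annualized_loss(frequency_per_year, magnitude_per_event):
    return frequency_per_year * magnitude_per_event

def prioritize(scenarios):
    """Rank risk scenarios by expected annual loss, highest first."""
    return sorted(scenarios,
                  key=lambda s: annualized_loss(s["lef"], s["lm"]),
                  reverse=True)

scenarios = [
    {"name": "phishing-led fraud", "lef": 4.0, "lm": 50_000},    # $200k/yr
    {"name": "ransomware outage",  "lef": 0.5, "lm": 1_200_000}, # $600k/yr
    {"name": "lost laptop",        "lef": 2.0, "lm": 10_000},    # $20k/yr
]
ranked = prioritize(scenarios)
```

This is what "balance risk and return" means in practice: the rare-but-expensive ransomware scenario outranks the frequent-but-cheap one, steering the security budget accordingly.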


Why is server cooling so important in the data center industry?

AI and other HPC sectors are continuing to drive up the power density of rack-mount server systems. This increased compute means increased power draw, which leads to increased heat generation. Removing that heat from the server systems in turn requires more power for high-CFM (cubic feet per minute) fans. Liquid cooling technologies, including rack-level cooling and immersion, can improve the efficiency of heat removal from server systems, requiring less powerful fans. In turn, this can reduce the overall power budget of a rack of servers. When extrapolating this out across large sections of a data center footprint, the savings can add up significantly. When you consider some of the latest Nvidia rack offerings require 40 kW or more, you can start to see how the power requirements are shifting to the extreme. For reference, it’s not uncommon for a lot of electronic trading co-locations to only offer 6-12 kW racks, which are sometimes operated half-empty due to the servers requiring more power draw than the rack can provide. These trends are going to force data centers to adopt any technology that can reduce the power burden on not only their own infrastructure but also the local infrastructure that supplies them.
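A back-of-the-envelope calculation shows how per-server fan savings compound across a footprint. Every number below is invented for illustration — server counts, IT load, and fan draw vary enormously in practice.

```python
# Hypothetical arithmetic: fan-power savings from liquid cooling, scaled
# across a data-center footprint. All figures are illustrative assumptions.
def rack_power_kw(servers, it_load_w, fan_w):
    return servers * (it_load_w + fan_w) / 1000

air_cooled    = rack_power_kw(servers=20, it_load_w=800, fan_w=120)  # high-CFM fans
liquid_cooled = rack_power_kw(servers=20, it_load_w=800, fan_w=30)   # reduced airflow need

saving_per_rack_kw = air_cooled - liquid_cooled   # ~1.8 kW per rack
fleet_saving_kw = saving_per_rack_kw * 500        # across a 500-rack footprint
```

Under these assumptions a single rack saves under 2 kW, yet the fleet-wide figure approaches a megawatt — before counting the knock-on savings in facility cooling needed to reject that fan heat.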


Cutting the High Cost of Testing Microservices

Given the high costs associated with environment duplication, it is worth considering alternative strategies. One approach is to use dynamic environment provisioning, where environments are created on demand and torn down when no longer needed. This method can help optimize resource utilization and reduce costs by avoiding the need for permanently duplicated setups. This can keep costs down but still comes with the trade-off of sending some testing to staging anyway. That’s because there are shortcuts that we must take to spin up these dynamic environments like using mocks for third-party services. This may put us back at square one in terms of testing reliability, that is how well our tests reflect what will happen in production. At this point, it’s reasonable to consider alternative methods that use technical fixes to make staging and other near-to-production environments easier to test on. ... While duplicating environments might seem like a practical solution for ensuring consistency in microservices, the infrastructure costs involved can be significant. By exploring alternative strategies such as dynamic provisioning and request isolation, organizations can better manage their resources and mitigate the financial impact of maintaining multiple environments.


The Cybersecurity Workforce Has an Immigration Problem

Creating a skilled immigration pathway for cybersecurity will require new policies. Chief among them is a mechanism to verify that applicants have relevant cybersecurity skills. One approach is allowing people to identify themselves by bringing forth previously unidentified bugs. This strategy is a natural way to prove aptitude and has the additional benefit of requiring no formal expertise or expensive testing. However, it would also require safe harbor provisions to protect individuals from prosecution under the Computer Fraud and Abuse Act. ... The West’s adversaries may also play a counterintuitive role in a cybersecurity workforce solution. Recent work from Eugenio Benincasa at ETH Zurich highlights the strength of China’s cybersecurity workforce. How many Chinese hackers might be tempted to immigrate to the West, if invited, for better pay and greater political freedom? While politically sensitive, a policy that allows foreign-trained cybersecurity experts to immigrate to the US could enhance the West’s workforce while depriving its adversaries of offensive talent. At the same time, such immigration programs must be measured and targeted to avoid adding tension to a world in which geopolitical conflict is already rising. 


Cross-Cloud: The Next Evolution in Cloud Computing?

The key difference between cross-cloud and multicloud is that cross-cloud spreads the same workload across clouds. In contrast, multicloud simply means using more than one public cloud at the same time, with one cloud hosting some workloads and other clouds hosting other workloads. ... That said, in other respects, cross-cloud and multicloud offer similar benefits — although cross-cloud allows organizations to double down on some of those benefits. For instance, a multicloud strategy can help reduce cloud costs by allowing you to pick and choose from among multiple clouds for different types of workloads, depending on which cloud offers the best pricing for different types of services. One cloud might offer more cost-effective virtual servers, for example, while another has cheaper object storage. As a result, you use one cloud to host VM-based workloads and another to store data. You can do something similar with cross-cloud, but in a more granular way. Instead of having to devote an entire workload to one cloud or another depending on which cloud offers the best overall pricing for that type of workload, you can run some parts of the workload on one cloud and others on a different cloud.
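The granular, per-component placement idea can be illustrated with a toy cost model; the cloud names, service types, and prices below are invented for illustration:

```python
# Hypothetical per-service hourly pricing for two clouds (invented numbers).
PRICING = {
    "cloud_a": {"vm": 0.10, "object_storage": 0.05},
    "cloud_b": {"vm": 0.12, "object_storage": 0.02},
}

def place_components(components):
    """Assign each workload component to the cheapest cloud for its service type.

    components maps component name -> service type it needs."""
    return {
        name: min(PRICING, key=lambda cloud: PRICING[cloud][service_type])
        for name, service_type in components.items()
    }

# One workload, split across clouds: compute on the cheaper VM provider,
# data on the cheaper storage provider.
workload = {"api": "vm", "media_bucket": "object_storage"}
print(place_components(workload))  # {'api': 'cloud_a', 'media_bucket': 'cloud_b'}
```

A multicloud strategy would have to assign the whole workload to whichever cloud is cheaper overall; the cross-cloud version optimizes each component independently.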


Will We Survive The Transitive Vulnerability Locusts

The issue today is that modern software development resembles constructing with Legos, where applications are built using numerous open-source dependencies — no one writes frameworks from scratch anymore. With each dependency comes the very real probability of inherited vulnerabilities. When unique applications are then built on top of those frameworks, the result is a patchwork of potentially vulnerable dependencies stitched together with our own proprietary code, without any mitigation of the existing vulnerabilities. ... With a proposed solution, it would be easy to conclude that we have fixed the problem. Given this vulnerability, we could just patch it and be secure, right? But after we updated the manifest file, and theoretically removed the transitive vulnerability, it still showed up in the SCA scan. After two tries at remediating the problem, we recognized that two different versions of the dependency were present. Using the SCA scan, we determined where the root cause of the vulnerability had been imported and used. This is a fine manual fix, but reproducing this process manually at scale is near-impossible. We therefore decided to test whether we could group CVE behavior by their common weakness enumeration (CWE) classification.
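The CWE-grouping experiment described above can be sketched in a few lines; the finding records below are illustrative examples, and real SCA tools each have their own output format:

```python
from collections import defaultdict

# Illustrative SCA findings: each record ties a CVE to its CWE classification.
findings = [
    {"cve": "CVE-2021-44228", "cwe": "CWE-502", "package": "log4j-core"},
    {"cve": "CVE-2019-17571", "cwe": "CWE-502", "package": "log4j"},
    {"cve": "CVE-2022-22965", "cwe": "CWE-94", "package": "spring-beans"},
]

def group_by_cwe(findings):
    """Bucket CVE findings by CWE so remediation work can be batched."""
    groups = defaultdict(list)
    for finding in findings:
        groups[finding["cwe"]].append(finding["cve"])
    return dict(groups)

print(group_by_cwe(findings))
# {'CWE-502': ['CVE-2021-44228', 'CVE-2019-17571'], 'CWE-94': ['CVE-2022-22965']}
```

Grouping by weakness class rather than by individual CVE is what makes the process scale: one remediation pattern (here, for deserialization flaws) can be applied to every finding in the bucket.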



Quote for the day:

"You are the only one who can use your ability. It is an awesome responsibility." -- Zig Ziglar

Daily Tech Digest - July 01, 2024

The dangers of voice fraud: We can’t detect what we can’t see

The inherent imperfections in audio offer a veil of anonymity to voice manipulations. A slightly robotic tone or a static-laden voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective but also remarkably insidious. Imagine receiving a phone call from a loved one's number telling you they are in trouble and asking for help. The voice might sound a bit off, but you attribute this to the wind or a bad line. The emotional urgency of the call might compel you to act before you think to verify its authenticity. Herein lies the danger: Voice fraud preys on our readiness to ignore minor audio discrepancies, which are commonplace in everyday phone use. Video, on the other hand, provides visual cues: small details like hairlines or facial expressions are clear giveaways that even the most sophisticated fraudsters have not been able to get past the human eye. On a voice call, those warnings are not available. That's one reason most mobile operators, including T-Mobile, Verizon and others, make free services available to block — or at least identify and warn of — suspected scam calls.


Provider or partner? IT leaders rethink vendor relationships for value

Vendors achieve partner status in McDaniel's eyes by consistently demonstrating accountability and integrity; getting ahead of potential issues to ensure there are no interruptions or problems with the provided products or services; and understanding his operations and objectives. ... McDaniel, other CIOs, and CIO consultants agree that IT leaders don't need to cultivate partnerships with every vendor; many, if not most, can remain straight-out suppliers, where the relationship is strictly transactional, fixed-fee, or fee-for-service based. That's not to suggest those relationships can't be chummy, but a good personal rapport between the IT team and the supplier's team is not what partnership is about. A provider-turned-partner is one that gets to know the CIO's vision and brings to the table ways to get there together, Bouryng says. ... As such, a true partner is also willing to say no to proposed work that could take the pair down an unproductive path. It's a sign, Bouryng says, that the vendor is more interested in reaching a successful outcome than merely scheduling work to do.


In the AI era, data is gold. And these companies are striking it rich

AI vendors have, sometimes controversially, made deals with organizations like news publishers, social media companies, and photo banks to license data for building general-purpose AI models. But businesses can also benefit from using their own data to train and enhance AI to assist employees and customers. Examples of source material can include sales email threads, historical financial reports, geographic data, product images, legal documents, company web forum posts, and recordings of customer service calls. “The amount of knowledge—actionable information and content—that those sources contain, and the applications you can build on top of them, is really just mindboggling,” says Edo Liberty, founder and CEO of Pinecone, which builds vector database software. Vector databases store documents or other files as numeric representations that can be readily mathematically compared to one another. That’s used to quickly surface relevant material in searches, group together similar files, and feed recommendations of content or products based on past interests. 
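The core idea behind a vector database, storing documents as numeric vectors and surfacing relevant material by mathematical comparison, can be shown with a toy cosine-similarity search. The tiny hand-made vectors below stand in for real learned embeddings, which typically have hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy "index": document name -> hand-made 3-dimensional embedding.
index = {
    "sales_email": [0.9, 0.1, 0.0],
    "financial_report": [0.1, 0.9, 0.2],
    "support_call": [0.8, 0.2, 0.1],
}

def nearest(query_vector, index):
    """Return the document whose embedding is most similar to the query."""
    return max(index, key=lambda doc: cosine_similarity(query_vector, index[doc]))

print(nearest([1.0, 0.1, 0.0], index))  # 'sales_email'
```

Production vector databases add the pieces this sketch omits: approximate nearest-neighbor indexes so search stays fast at millions of vectors, plus storage, filtering, and updates.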


Machine Vision: The Key To Unleashing Automation's Full Potential

Machine vision is a class of technologies that process information from visual inputs such as images, documents, computer screens, videos and more. Its value in automation lies in its ability to capture and process documents, images and video at quantities and speeds far in excess of human capability. ... Machine vision-based technologies are even becoming central to the creation of automations themselves. For example, instead of relying on human workers to describe the processes being automated, recordings of the process are captured end-to-end by machine vision software, combined with other technologies, and then used as input to automate much of the work of programming the digital workers (bots). ... Machine vision is integral to maximizing the impact of advanced automation technologies on business operations and paving the way for increased capabilities in the automation space.


Put away your credit cards — soon you might be paying with your face

Biometric purchases using facial recognition are beginning to gain some traction. The restaurant CaliExpress by Flippy, a fully automated fast-food restaurant, is an early adopter. Whole Foods stores offer pay-by-palm, an alternative biometric to facial recognition. Given that they are already using biometrics, facial recognition is likely to be available in their stores at some point in the future. ... Just as credit and debit cards have overtaken cash as the dominant means to make purchases, biometrics like facial recognition could eventually become the dominant way to make purchases. There will, however, be real costs during such a transition, which will largely be absorbed by consumers in higher prices. The technology software and hardware required to implement such systems will be costly, pushing it out of reach for many small- and medium-size businesses. However, as facial recognition systems become more efficient and reliable, and losses from theft are reduced, an equilibrium will be achieved that will make such additional costs more modest and manageable to absorb.


Technologists must be ready to seize new opportunities

For technologists, this new dynamic represents a profound (and daunting) change. They're being asked to report on application performance in a more business-focussed, strategic way and to engage in conversations around experience at a business level. They're operating outside their comfort zone, far beyond the technical reporting and discussions they've previously encountered. Of course, technologists are used to rising to a challenge and pivoting to meet the changing needs of their organisations and their senior leaders. We saw this during the pandemic, and many will (rightly) be excited about the opportunity to expand their skills and knowledge, and to elevate their standing within their organisations. The challenge that many technologists face, however, is that they currently don't have the tools and insights they need to operate in a strategic manner. Many don't have full visibility across their hybrid environments, and they're struggling to manage and optimise application availability, performance and security in an effective and sustainable manner. They can't easily detect issues, and even when they do, it is incredibly difficult to quickly understand root causes and dependencies in order to fix issues before they impact end user experience.


Vulnerability management empowered by AI

Using AI will take vulnerability management to the next level. AI not only reduces analysis time but also effectively identifies threats. ... AI-driven systems can identify patterns and anomalies that signify potential vulnerabilities or attacks. Converting logs into structured data and charts makes analysis simpler and quicker. Incidents should be prioritised based on security risk, with notifications issued for immediate action. Self-learning is another area where AI can be applied: trained on fresh data, it stays current with the changing environment and can address new and emerging threats, identifying high-risk threats and previously unseen ones. Implementing AI requires iterations to train the model, which may be time-consuming, but over time it becomes easier to identify threats and flaws. AI-driven platforms constantly gather insights from data, adjusting to shifting landscapes and emerging risks. As they progress, they enhance their precision and efficacy in pinpointing weaknesses and offering practical guidance.
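The pattern-and-anomaly idea can be sketched with a simple statistical baseline; the event counts and the z-score threshold below are illustrative, not how any particular product works:

```python
import statistics

def flag_anomalies(counts, threshold=2.5):
    """Return indices of intervals whose event count deviates sharply from
    the baseline, using a z-score against mean and standard deviation.

    The 2.5-sigma threshold is a common heuristic, not a product setting."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [
        i for i, count in enumerate(counts)
        if stdev and abs(count - mean) / stdev > threshold
    ]

# Logs converted to data: failed login attempts per hour over ten hours.
failed_logins_per_hour = [4, 5, 3, 6, 4, 5, 4, 90, 5, 4]
print(flag_anomalies(failed_logins_per_hour))  # [7] -- the 90-failure spike
```

Real AI-driven systems learn far richer baselines (per user, per host, per time of day), but the workflow is the same: turn logs into numbers, model what normal looks like, and notify on the outliers.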


Why every company needs a DDoS response plan

Given the rising number of DDoS attacks each year and the reality that DDoS attacks are frequently used in more sophisticated hacking attempts to apply maximum pressure on victims, a DDoS response plan should be included in every company's cybersecurity tool kit. After all, more than temporary loss of access to a website or application is at stake. A business's failure to withstand a DDoS attack and rapidly recover can result in loss of revenue, compliance failures, and impacts on brand reputation and public perception. Successful handling of a DDoS attack depends entirely on a company's preparedness and execution of existing plans. Like any business continuity strategy, a DDoS response plan should be a living document that is tested and refined over the years. It should, at the highest level, consist of five stages: preparation, detection, classification, reaction, and postmortem reflection. Each phase informs the next, and the cycle improves with each iteration.
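As a sketch of the detection stage, a sliding-window counter can flag sources whose request rate exceeds a threshold; the window size and limit below are illustrative only, and real detection combines many more signals:

```python
from collections import deque

class RateDetector:
    """Flag a source when its request count within a sliding time window
    exceeds a limit. Window and limit are illustrative parameters."""

    def __init__(self, window_seconds=10, max_requests=100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.events = {}  # source -> deque of recent request timestamps

    def record(self, source, timestamp):
        """Record one request; return True if the source looks like a flood."""
        q = self.events.setdefault(source, deque())
        q.append(timestamp)
        # Drop timestamps that have aged out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_requests

detector = RateDetector(window_seconds=10, max_requests=5)
flags = [detector.record("203.0.113.7", t) for t in range(8)]  # 1 request/sec
print(flags[-1])  # True: 8 requests inside the 10-second window exceeds 5
```

A detector like this feeds the classification stage: a flagged source still has to be distinguished from a legitimate traffic spike before the reaction stage (rate limiting, upstream filtering) kicks in.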


Reduce security risk with 3 edge-securing steps

Over the past several years, web-based SSL VPNs have been targeted and used to gain remote access. You may even want to evaluate how your firm allows remote access and how often your VPN solution has been attacked or put at risk. ... "The severity of the vulnerabilities and the repeated exploitation of this type of vulnerability by actors means that NCSC recommends replacing solutions for secure remote access that use SSL/TLS with more secure alternatives," the authority says. "The NCSC recommends internet protocol security (IPsec) with internet key exchange (IKEv2). Other countries' authorities have recommended the same." ... Pay extra attention to how credentials that need to be accessed are protected from unauthorized access. Use best-practice processes to secure passwords and ensure that each user has appropriate credentials and access. ... When using cloud services, you need to ensure that only those vendors you trust or have thoroughly vetted have access to your cloud services.

The real key to machine learning success is something that is mostly missing from genAI: the constant tuning of the model. “In ML and AI engineering,” Shankar writes, “teams often expect too high of accuracy or alignment with their expectations from an AI application right after it’s launched, and often don’t build out the infrastructure to continually inspect data, incorporate new tests, and improve the end-to-end system.” It’s all the work that happens before and after the prompt, in other words, that delivers success. For genAI applications, partly because of how fast it is to get started, much of this discipline is lost. ... As with software development, where the hardest work isn’t coding but rather figuring out which code to write, the hardest thing in AI is figuring out how or if to apply AI. When simple rules need to yield to more complicated rules, Valdarrama suggests switching to a simple model. Note the continued stress on “simple.” As he says, “simplicity always wins” and should dictate decisions until more complicated models are absolutely necessary.
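The continual-inspection loop described above can be sketched as a small regression suite run against the model on every change; `model` here is a canned stand-in for a real system, and the test cases are invented for illustration:

```python
def model(prompt):
    """Stand-in for a real AI application; returns canned answers."""
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(prompt, "unknown")

# A living test set: new cases get added as real usage exposes gaps.
test_cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("capital of Japan", "Tokyo"),  # newly added case the model fails
]

def run_suite(model, cases):
    """Score the model against the suite; return accuracy and failing prompts."""
    results = [(prompt, model(prompt) == expected) for prompt, expected in cases]
    accuracy = sum(ok for _, ok in results) / len(results)
    return accuracy, [prompt for prompt, ok in results if not ok]

accuracy, failures = run_suite(model, test_cases)
print(failures)  # ['capital of Japan']
```

The point is not the scoring logic, which is trivial, but the discipline: the suite grows with every observed failure, and accuracy is tracked across changes instead of being judged once at launch.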



Quote for the day:

“The vision must be followed by the venture. It is not enough to stare up the steps - we must step up the stairs.” -- Vance Havner