Daily Tech Digest - January 06, 2025

Should States Ban Mandatory Human Microchip Implants?

“U.S. states are increasingly enacting legislation to pre-emptively ban employers from forcing workers to be ‘microchipped,’ which entails having a subdermal chip surgically inserted between one’s thumb and index finger,” wrote the authors of the report. “Internationally, more than 50,000 people have elected to receive microchip implants to serve as their swipe keys, credit cards, and means to instantaneously share social media information. This technology is especially popular in Sweden, where chip implants are more widely accepted to use for gym access, e-tickets on transit systems, and to store emergency contact information.” ... “California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision,” Singularity Hub wrote. “In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.” That same piece quotes Alan Mardinly, director of biology at Science Corporation, as saying that the advantage of a biohybrid implant is that it “can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain.”


AI revolution drives demand for specialized chips, reshaping global markets

There’s now a shift toward smaller AI models that use only internal corporate data, allowing for more secure and customizable genAI applications and AI agents. At the same time, edge AI is taking hold because it allows AI processing to happen on devices (including PCs, smartphones, vehicles, and IoT devices), reducing reliance on cloud infrastructure and spurring demand for efficient, low-power chips. “The challenge is if you’re going to bring AI to the masses, you’re going to have to change the way you architect your solution; I think this is where Nvidia will be challenged because you can’t use a big, complex GPU to address endpoints,” said Mario Morales, a group vice president at research firm IDC. “So, there’s going to be an opportunity for new companies to come in — companies like Qualcomm, ST Micro, Renesas, Ambarella and all these companies that have a lot of the technology, but now it’ll be about how to use it.” ... Enterprises and other organizations are also shifting their focus from single AI models to multimodal AI, or LLMs capable of processing and integrating multiple types of data or “modalities,” such as text, images, audio, video, and sensory input. Input from these diverse sources creates a more comprehensive understanding of the data and enhances performance across tasks.


How to Address an Overlooked Aspect of Identity Security: Non-human Identities

Compromised identities and credentials are the No. 1 tactic cyber threat actors and ransomware campaigns use to break into organizational networks and move laterally. Identity is the most vulnerable element in an organization’s attack surface because there is a significant misperception around what identity infrastructure (IdPs such as Okta and other IT solutions) and identity security providers (PAM, MFA, etc.) can protect. Each solution only protects the silo it is set up to secure, not an organization’s complete identity landscape, including human and non-human identities (NHIs), privileged and non-privileged users, on-prem and cloud environments, IT and OT infrastructure, and many other areas that go unmanaged and unprotected. ... Most organizations use a combination of on-prem management tools, a mix of one or more cloud identity providers (IdPs), and a handful of identity solutions (PAM, IGA) to secure identities. But each tool operates in a silo, leaving gaps and blind spots that invite attack. Eight out of 10 organizations cannot prevent the misuse of service accounts in real time because visibility and security coverage are sporadic or missing. NHIs fly under the radar, as security and identity teams sometimes don’t even know they exist.


Version Control in Agile: Best Practices for Teams

With multiple developers working on different features, fixes, or updates simultaneously, it’s easy for code to overlap or conflict without clear guidelines. Having a structured branching approach prevents confusion and minimizes the risk of one developer’s work interfering with another’s. ... One of the cornerstones of good version control is making small, frequent commits. In Agile development, progress happens in iterations, and version control should follow that same mindset. Large, infrequent commits can cause headaches when it’s time to merge, increasing the chances of conflicts and making it harder to pinpoint the source of issues. Small, regular commits, on the other hand, make it easier to track changes, test new functionality, and resolve conflicts early before they grow into bigger problems. ... An organized repository is crucial to maintaining productivity. Over time, it’s easy for the repository to become cluttered with outdated branches, unnecessary files, or poorly named commits. This clutter slows down development, making it harder for team members to navigate and find what they need. Teams should regularly review their repositories and remove unused branches or files that are no longer relevant. 
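The structured-branching, small-commit workflow described above can be sketched with plain Git commands. This is a minimal illustration, not from the article; the repository location, branch name, and file names are all made up for the example.

```shell
# Sketch of a feature-branch workflow with small, frequent commits.
# Repository location, branch, and file names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo "base app" > app.txt
git add app.txt
git commit -qm "Initial commit"
main=$(git rev-parse --abbrev-ref HEAD)   # default branch name varies by setup

# Each feature lives on its own branch, isolating work in progress...
git checkout -qb feature/login
echo "login form" >> app.txt
git commit -qam "Add login form"          # small, focused commit
echo "input validation" >> app.txt
git commit -qam "Validate login input"    # another small commit

# ...and is merged back early, so conflicts stay small and easy to resolve.
git checkout -q "$main"
git merge -q feature/login
git branch -d feature/login               # prune the merged branch to keep the repo tidy
git log --oneline
```

The same pattern scales to many developers: because each commit is small and each branch is short-lived, merges happen while the divergence is still easy to reason about, which is exactly the point the article makes about resolving conflicts early.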


Abusing MLOps platforms to compromise ML models and enterprise data lakes

Machine learning operations (MLOps) is the practice of deploying and maintaining ML models in a secure, efficient, and reliable way. The goal of MLOps is to provide a consistent, automated process for rapidly getting an ML model into production. ... There are several well-known attacks that can be performed against the MLOps lifecycle to affect the confidentiality, integrity, and availability of ML models and associated data. However, performing these attacks against an MLOps platform using stolen credentials has not been covered in public security research. ... Data poisoning: In this attack, an attacker with access to the raw data used in the “Design” phase of the MLOps lifecycle injects attacker-provided data or directly modifies a training dataset. The goal of a data poisoning attack is to influence what the ML model learns during training, and therefore how it behaves once deployed to production. ... Model extraction attacks involve an attacker stealing a trained ML model that is deployed in production. An attacker could use a stolen model to recover sensitive information, such as the trained weights, or to exploit the model’s predictive capabilities for their own financial gain.


Get Going With GitOps

GitOps implementations have a significant impact on infrastructure automation by providing a standardized, repeatable process for managing infrastructure as code, Rose says. The approach allows faster, more reliable deployments and simplifies the maintenance of infrastructure consistency across diverse environments, from development to production. "By treating infrastructure configurations as versioned artifacts in Git, GitOps brings the same level of control and automation to infrastructure that developers have enjoyed with application code." ... GitOps' primary benefit is its ability to enable peer review for configuration changes, Peele says. "It fosters collaboration and improves the quality of application deployment." He adds that it also empowers developers -- even those without prior operations experience -- to control application deployment, making the process more efficient and streamlined. Another benefit is GitOps' ability to allow teams to push minimum viable changes more easily, thanks to faster and more frequent deployments, says Siri Varma Vegiraju, a Microsoft software engineer. "Using this strategy allows teams to deploy multiple times a day and quickly revert changes if issues arise," he explains via email. 
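The core GitOps idea quoted above, treating infrastructure configuration as versioned artifacts in Git, can be sketched in a few commands: a config file is committed, a change lands as a new commit, and a bad change is undone with a revert rather than an ad hoc manual fix. The `deployment.yaml` contents and image tags here are hypothetical, chosen only for illustration.

```shell
# Minimal sketch of GitOps-style config management: the desired state of
# the infrastructure lives in Git, and rollback is just another commit.
# File contents and image tags are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ops@example.com
git config user.name "Ops"

cat > deployment.yaml <<'EOF'
replicas: 2
image: myapp:1.0
EOF
git add deployment.yaml
git commit -qm "Deploy myapp 1.0 with 2 replicas"

# A peer-reviewed change is merged as a commit; a GitOps agent (such as
# Argo CD or Flux) would then sync the live environment to this desired state.
sed 's/myapp:1.0/myapp:1.1/' deployment.yaml > tmp && mv tmp deployment.yaml
git commit -qam "Roll out myapp 1.1"

# If 1.1 misbehaves, reverting the commit restores the known-good state,
# leaving a full audit trail in the Git history.
git revert --no-edit HEAD
grep image deployment.yaml
```

This is what makes the "quickly revert changes if issues arise" benefit concrete: the rollback is itself a reviewed, recorded commit, so the history always reflects what was actually deployed.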


Balancing proprietary and open-source tools in cyber threat research

First, it is important to assess the requirements of an organization by identifying the capabilities needed, such as threat intelligence platforms or malware analysis tools. Next, evaluate open-source tools, which can be cost-effective and customizable but may rely on community support and require frequent updates. In contrast, proprietary tools can offer advanced features, dedicated support, and better integration with other products. Finally, think about scalability and flexibility, as future growth may necessitate scalable solutions. ... The technology is not magic, but it is a powerful tool to speed up processes and bolster security procedures while also narrowing the gap between advanced and junior analysts. However, as of today, the technology still requires verification and validation. Globally, security experts with a dual skill set in security and AI will be in high demand. As the adoption of generative AI systems increases, we need people who understand these technologies, because threat actors are also learning. ... If a CISO needs to evaluate the effectiveness of these tools, they first need to understand their needs and pain points and then seek guidance from experts. Adopting generative AI security solutions just because they are the latest trend is not the right approach.


Get your IT infrastructure AI-ready

Artificial intelligence adoption is a challenge many CIOs grapple with as they look to the future. Before jumping in, their teams must possess the practical knowledge, skills, and resources to implement AI effectively. ... AI implementation is costly, and training AI models requires substantial investment. "To realize the potential, you have to pay attention to what it's going to take to get it done, how much it's going to cost, and make sure you're getting a benefit," Ramaswami said. "And then you have to go get it done." GenAI has rapidly transformed from an experimental technology to an essential business tool, with adoption rates more than doubling in 2024, according to a recent study by AI at Wharton. ... According to Donahue, IT teams are exploring three key elements: choosing language models, leveraging AI from cloud services, and building a hybrid multicloud operating model to get the best of on-premises and public cloud services. "We're finding that very, very, very few people will build their own language model," he said. "That's because building a language model in-house is like building a car in the garage out of spare parts." Companies look to cloud-based language models, but must scrutinize security and governance capabilities while controlling cost over time.


What is an EPMO? Your organization’s strategy navigator

The key is to ensure the entire strategy lifecycle is set up for success rather than endlessly iterating to perfect strategy execution. Without properly defining, governing, and prioritizing initiatives upfront, even the best delivery teams will struggle to achieve business goals in a way that drives the right return for the organization’s investment. For most organizations, there’s more than one gap preventing desired results. ... The EPMO’s job is to strip away unnecessary complexity and create frameworks that empower teams to deliver faster, more effectively, and with greater focus. PMO leaders should ask how each process helps the business hit its goals faster. By eliminating redundant meetings and scaling governance to match project size and risk, teams can shorten delivery timelines. This kind of targeted adjustment keeps momentum high without sacrificing quality or control. ... For an EPMO to be effective, it ideally needs to report directly to the C-suite. This matters because proximity equals influence. When the EPMO has visibility at the top, it can drive alignment across departments, break down silos, drive accountability, and ensure initiatives stay connected to overall business objectives, serving as the strategy navigator for the C-suite.


Data Center Hardware in 2025: What’s Changing and Why It Matters

DPUs can handle tasks like network traffic management, which would otherwise fall to CPUs. In this way, DPUs reduce the load placed on CPUs, ultimately making greater computing capacity available to applications. DPUs have been around for several years, but they’ve become particularly important as a way of boosting the performance of resource-hungry workloads, like AI training, by complementing AI accelerators. This is why I think DPUs are about to have their moment. ... Recent events have underscored the risk of security threats linked to physical hardware devices. And while I doubt anyone is currently plotting to blow up data centers by placing secret bombs inside servers, I do suspect there are threat actors out there vying to do things like plant malicious firmware on servers as a way of creating backdoors that they can use to hack into data centers. For this reason, I think we’ll see an increased focus in 2025 on validating the origins of data center hardware and ensuring that no unauthorized parties had access to equipment during the manufacturing and shipping processes. Traditional security controls will remain important, too, but I’m betting on hardware security becoming a more intense area of concern in the year ahead.



Quote for the day:

"Nothing in the world is more common than unsuccessful people with talent." -- Anonymous

Daily Tech Digest - January 05, 2025

Phantom data centers: What they are (or aren’t) and why they’re hampering the true promise of AI

Fake data centers represent an urgent bottleneck in scaling data infrastructure to keep up with compute demand. This emerging phenomenon is preventing capital from flowing where it actually needs to. Any enterprise that can help solve this problem — perhaps leveraging AI to solve a problem created by AI — will have a significant edge. ... As utilities struggle to sort fact from fiction, the grid itself becomes a bottleneck. McKinsey recently estimated that global data center demand could reach up to 152 gigawatts by 2030, adding 250 terawatt-hours of new electricity demand. In the U.S., data centers alone could account for 8% of total power demand by 2030, a staggering figure considering how little demand has grown in the last two decades. Yet, the grid is not ready for this influx. Interconnection and transmission issues are rampant, with estimates suggesting the U.S. could run out of power capacity by 2027 to 2029 if alternative solutions aren’t found. Developers are increasingly turning to on-site generation like gas turbines or microgrids to avoid the interconnection bottleneck, but these stopgaps only serve to highlight the grid’s limitations.


Understanding And Preparing For The 7 Levels Of AI Agents

Task-specialized agents excel in somewhat narrow domains, often outperforming humans in specific tasks by collaborating with domain experts to complete well-defined activities. These agents are the backbone of many modern AI applications, from fraud detection algorithms to medical imaging systems. Their origins trace back to the expert systems of the 1970s and 1980s, like MYCIN, a rule-based system for diagnosing infections. ... Context-aware agents distinguish themselves by their ability to handle ambiguity, dynamic scenarios, and synthesize a variety of complex inputs. These agents analyze historical data, real-time streams, and unstructured information to adapt and respond intelligently, even in unpredictable scenarios. ... The idea of self-reflective agents ventures into speculative territory. These systems would be capable of introspection and self-improvement. The concept has roots in philosophical discussions about consciousness, first introduced by Alan Turing in his early work on machine intelligence and later explored by thinkers like David Chalmers. Self-reflective agents would analyze their own decision-making processes and refine their algorithms autonomously, much like a human reflects on past actions to improve future behavior.


The 7 Key Software Testing Principles: Why They Matter and How They Work in Practice

Identifying defects early in the software development lifecycle is critical because the cost and effort to fix issues grow exponentially as development progresses. Early testing not only minimizes these risks but also streamlines the development process by addressing potential problems when they are most manageable and least expensive. This proactive approach saves time, reduces costs, and ensures a smoother path to delivering high-quality software. ... The pesticide paradox suggests that repeatedly running the same set of tests will not uncover new or previously unknown defects. To continue identifying issues effectively, test methodologies must evolve by incorporating new tests, updating existing test cases, or modifying test steps. This ongoing refinement ensures that testing remains relevant and capable of discovering previously hidden problems. ... Test strategies must be tailored to the specific context of the software being tested. The requirements for different types of software—such as a mobile app, a high-transaction e-commerce website, or a business-critical enterprise application—vary significantly. As a result, testing methodologies should be customized to address the unique needs of each type of application, ensuring that testing is both effective and relevant to the software's intended use and environment.


This Year, RISC-V Laptops Really Arrive

DeepComputing is now working in partnership with Framework, a laptop maker founded in 2019 with the mission to “fix consumer electronics,” as it’s put on the company’s website. Framework sells modular, user-repairable laptops that owners can keep indefinitely, upgrading parts (including those that can’t usually be replaced, like the mainboard and display) over time. “The Framework laptop mainboard is a place for board developers to come in and create their own,” says Patel. The company hopes its laptops can accelerate the adoption of open-source hardware by offering a platform where board makers can “deliver system-level solutions,” Patel adds, without the need to design their own laptop in-house. ... The DeepComputing DC-Roma II laptop marked a major milestone for open-source computing, and not just because it shipped with Ubuntu installed. It was the first RISC-V laptop to receive widespread media coverage, especially on YouTube, where video reviews of the DC-Roma II collectively received more than a million views. ... Balaji Baktha, Ventana’s founder and CEO, is adamant that RISC-V chips will go toe-to-toe with x86 and Arm across a variety of products. “There’s nothing that is ISA specific that determines if you can make something high performance, or not,” he says. “It’s the implementation of the microarchitecture that matters.”


The cloud architecture renaissance of 2025

First, get your house in order. The next three to six months should be spent deep-diving into current cloud spending and utilization patterns. I’m talking about actual numbers, not the sanitized versions you show executives. Map out your AI and machine learning (ML) workload projections because, trust me, they will explode beyond your current estimates. While you’re at it, identify which workloads in your public cloud deployments are bleeding money—you’ll be shocked at what you find. Next, develop a workload placement strategy that makes sense. Consider data gravity, performance requirements, and regulatory constraints. This isn’t about following the latest trend; it’s about making decisions that align with business realities. Create explicit ROI models for your hybrid and private cloud investments. Now, let’s talk about the technical architecture. The organizational piece is critical, and most enterprises get it wrong. Establish a Cloud Economics Office that combines infrastructure specialists, data scientists, financial analysts, and security experts. This is not just another IT team; it is a business function that must drive real value. Investment priorities need to shift, too. Focus on automated orchestration tools, cloud management platforms, and data fabric solutions.


How datacenters use water and why kicking the habit is nearly impossible

While dry coolers and chillers may not consume water onsite, they aren't without compromise. These technologies consume substantially more power from the local grid and potentially result in higher indirect water consumption. According to the US Energy Information Administration, the US sources roughly 89 percent of its power from natural gas, nuclear, and coal plants. Many of these plants employ steam turbines to generate power, which consumes a lot of water in the process. Ironically, while evaporative coolers are why datacenters consume so much water onsite, the same technology is commonly employed to reduce the amount of water lost to steam. Even so, the amount of water consumed through energy generation far exceeds that of modern datacenters. ... Understanding that datacenters are, with few exceptions, always going to use some amount of water, there are still plenty of ways operators are looking to reduce direct and indirect consumption. One of the most obvious is matching water flow rates to facility load and utilizing free cooling wherever possible. Using a combination of sensors and software automation to monitor pumps and filters at facilities utilizing evaporative cooling, Sharp says Digital Realty has observed a 15 percent reduction in overall water usage.


Data centres in space: they’re a brilliant idea, but a herculean challenge

Data centres beyond Earth’s atmosphere would have access to continuous solar energy and could be naturally cooled by the vacuum of space. Away from terrestrial issues like planning permission, such facilities could be rapidly deployed and expanded as the demand for more data keeps increasing. It may sound like something from a sci-fi novel, but this concept has been gaining more attention as space technology has advanced and the need for sustainable and scalable data centres has become apparent. ... Space weather, such as solar flares could disrupt operations, while collisions with debris are a major worry – rather offsetting the fact that space-based data centres don’t have to fear earthquakes or floods. Advanced shielding could protect against things like radiation and micrometeoroids, but it will probably only do so much – particularly as Earth’s orbit becomes ever more crowded. To fix damaged facilities, advances in robotics and automation will of course help, but remote maintenance may not be able to address all issues. Sending repair crews remains a very complex and costly affair, and though the falling cost of space launches will again help here, it is still likely to be a huge burden for a few decades to come. In addition, disposing of data centre waste takes on a whole new level of complexity off-planet.


India’s Digital Data Protection Framework: Safety, Trust and Resilience

The draft rules cover various key areas, including the responsibilities of Data Fiduciaries, the role of Consent Managers, and protocols for State Data Processing, particularly in contexts like the distribution of subsidies and public services. They also detail measures for Breach Notifications, mechanisms for individuals to exercise their Data Rights, and special provisions for processing data related to children and persons with disabilities. The Data Protection Board, central to the enforcement of the Act, is set to function as a fully digital office, streamlining its operations and improving accessibility. Additionally, the rules outline procedures for appealing decisions through the Appellate Tribunal, ensuring accountability at every stage. One of the defining aspects of the draft rules is their alignment with the SARAL framework, which emphasises simplicity, clarity, and contextual definitions. To aid public understanding, illustrative examples and explanatory notes have been included, making the document accessible to stakeholders across industries, government bodies, and civil society. Both the draft rules and the accompanying explanatory notes are available on the MeitY website for public review and consultation. While legislative measures are being formalised, the government has swiftly addressed recent data breaches.


The Rise of AI Agents and Data-Driven Decisions

“In 2025, AI agents will take generative AI to the next level by moving beyond content creation to active participation in daily business operations,” he says. “These agents, capable of partial or full autonomy, will handle tasks like scheduling, lead qualification, and customer follow-ups, seamlessly integrating into workflows. Rather than replacing generative AI, they will enhance its utility by transforming insights into immediate, actionable outcomes.” Kawasaki emphasizes the developer-centric benefits as well. “AI agents will become faster and easier to build as low-code and no-code platforms mature, reducing the complexity of creating intelligent, AI-powered scenarios,” he says. ... “AI will play a transformative role in the fortification of cyber security by addressing challenges like scalability, prioritization and speed to detection. Unfortunately, cyber threats have become commonplace on the network and attackers are becoming more sophisticated in their methods – many times operating at a threshold that is very difficult to detect. As a result, organizations that fail to integrate an AI capability into their defense strategy risk being exposed to business-altering vulnerabilities. AI’s ability to monitor vast networks for imperceptible anomalies allows organizations to prioritize the most critical threats in real-time.”


New HIPAA Cybersecurity Rules Pull No Punches

Since the beginning, HIPAA has always been the best, yet insufficient, regulation dictating cybersecurity for the healthcare industry. "[There's] a history of the focus being in the wrong place because of the way HIPAA was laid out in the mid-1990s," says Errol Weiss, chief information security officer (CISO) of the Healthcare Information Sharing and Analysis Center (Health-ISAC). ... The newly proposed Security Rule aims to fix things up, with a laundry list of new requirements that touch on patch management, access controls, multifactor authentication (MFA), encryption, backup and recovery, incident reporting, risk assessments, compliance audits, and more. As Lawrence Pingree, vice president at Dispersive, acknowledges, "People have a love-hate relationship with regulations. But there's a lot of good that comes from HIPAA becoming a lot more prescriptive. Whenever you are more specific about the security controls that they must apply, the better off you are." ... Joseph J. Lazzarotti, principal at Jackson Lewis P.C., says provision 164.306 allowed for the kind of flexibility businesses always ask for: "That we're not expecting the same thing from every solo practitioner on Main Street in the Midwest versus the large hospital on the East Coast. There are obviously going to be different expectations for compliance."



Quote for the day:

“Do the best you can until you know better. Then when you know better, do better.” -- Maya Angelou

Daily Tech Digest - January 03, 2025

Tech predictions 2025 – what could be in store next year?

In 2025, we will hear of numerous cases where threat actors trick a corporate Gen AI solution into giving up sensitive information and causing high-profile data breaches. Many enterprises are using Gen AI to build customer-facing chatbots, in order to aid everything from bookings to customer service. Indeed, in order to be useful, LLMs must ultimately be granted access to information and systems in order to answer questions and take actions that a human would otherwise have been tasked with. As with any new technology, we will witness numerous corporations grant LLMs access to huge amounts of potentially sensitive data, without appropriate security considerations. ... The future of work won’t be a binary choice between humans or machines.  It will be an “and.” AI-powered humanoids will form a part of the future workforce, and we will likely see the first instance happen next year. This will force companies to completely reimagine their workplace dynamics – and the technology that powers them. ... At the same time, organisations must ensure their security postures keep pace. Not only to ensure the data being processed by humanoids is kept safe, but also to keep the humanoids safeguarded from hacking and threatening tweaks to their software and commands. 


7 Private Cloud Trends to Watch in 2025

A lot of organizations are repatriating workloads to private cloud from public cloud, but Rick Clark, global head of cloud advisory at digital transformation solutions company UST, warns they aren’t giving it the forethought they applied earlier when migrating to public clouds. As a result, they’re not getting the ROI they hope for. “We haven’t still figured out what is appropriate for workloads. I’m seeing companies wanting to move back the percentage of their workload to reduce cost without really understanding what the value is, so they’re devaluing what they’re doing,” says Clark. ... Artificial intelligence and automation are also set to play a crucial role in private cloud management. They enable businesses to handle growing complexity by automating resource optimization, enhancing threat detection, and managing costs. “The ongoing talent shortage in cybersecurity makes [AI and automation] especially valuable. By reducing manual workloads, AI allows companies to do more with fewer resources,” says Trevor Horwitz, CISO and founder at a cybersecurity, consulting, and compliance services provider. ... Security affects all aspects of a cloud journey, including the calculus of when and where to use private cloud environments. One significant challenge is making sure that all layers of the stack have detection and response capability.


Agility in Action: Elevating Safety through Facial Recognition

Facial Recognition Technology (FRT) stands out as a leading solution to these problems, protecting not only the physical boundaries but also the organization’s overall integrity. Through precise identity verification and user validation, FRT considerably lowers the possibility of unauthorized access. Organizations, irrespective of size, can benefit from this technology, which offers improved security and operational effectiveness. ... A comprehensive physical security program with interconnected elements serves as the backbone of any security infrastructure. Regulating who can enter or exit a facility is vital. Effective systems include traditional mechanical methods, such as locks and keys, as well as electronic solutions like RFID cards. By using these methods, only authorized persons are able to enter. Nonetheless, a technological solution that works with many Original Equipment Manufacturers (OEMs) is required to successfully counter today’s dangers. In addition to guaranteeing general user convenience, this technology should give top priority to data privacy and safety compliance.
Effective physical security is built on deterring unauthorized entry and identifying people of interest. This can include anything from physical security personnel to surveillance and access control systems.


Strategies for Managing Data Debt in Growing Organizations

Not all data debt is created equal. Growing organizations experiencing data sprawl at an expanding rate must conduct a thorough impact assessment to determine which aspects of their data debt are most harmful to operational efficiency and strategic initiatives. An effective approach involves quantifying the potential risks associated with each type of debt – such as compliance violations or lost customer insights – and calculating the opportunity cost of maintaining versus mitigating them. ... A core approach to managing data debt is to establish strong data governance practices that address inconsistencies and fragmentation. Before anything else, you must establish an adequate access control system and ensure it cannot be circumvented. Next, you must think about implementing robust validation mechanisms that will help prevent further debt accumulation. Data governance frameworks provide a foundation for minimizing ad hoc fixes, which are the primary drivers of data debt. ... An architectural shift that facilitates scalability can help avoid the bottlenecks that arise when data outgrows its infrastructure. Technologies like cloud platforms offer scalability without heavy up-front investments, allowing organizations to expand their capacity in line with their growth.


Secure by design vs by default – which software development concept is better?

The challenge here is that, while from a security perspective we may agree that it is wise, it could inevitably put developers and vendors at a competitive disadvantage. Those who don’t prioritize secure-by-design can get features, functionality, and products out to market faster, leading to potentially more market share, revenue, customer attraction/retention, and more. Additionally, many vendors are venture-capital backed, which comes with expectations of return on investment — and the reality that cyber is just one of many risks their business is facing. They must maintain market share, hit revenue targets, deliver customer satisfaction, raise brand awareness/exposure, and achieve the most advantageous business outcomes. ... Secure-by-default development focuses on ensuring that software components arrive at the end-user with all security features and functions fully implemented, with the goal of providing maximum security right out of the box. Most cyber professionals have experienced having to apply CIS Benchmarks, DISA STIGs, vendor guidance and so on to harden a new product or software to ensure we reduce its attack surface. Secure-by-default flips that paradigm on its head so that products arrive hardened and require customers to roll back or loosen the hardened configurations to tailor them to their needs.
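The "arrive hardened, loosen if you must" paradigm can be sketched as configuration whose security controls all ship enabled, where each relaxation is an explicit, recorded act by the customer. The control names below are hypothetical, not taken from CIS Benchmarks, DISA STIGs, or any specific product:

```python
# Minimal sketch of secure-by-default: every security control ships
# enabled, and callers must explicitly opt out of each one.
# Control names are hypothetical illustrations.

from dataclasses import dataclass, field

HARDENED_DEFAULTS = {
    "tls_required": True,
    "mfa_required": True,
    "anonymous_access": False,
    "verbose_error_pages": False,
}

@dataclass
class ProductConfig:
    settings: dict = field(default_factory=lambda: dict(HARDENED_DEFAULTS))
    overrides: list = field(default_factory=list)

    def loosen(self, name, value, justification):
        """Relaxing a hardened default requires a recorded justification."""
        if name not in self.settings:
            raise KeyError(name)
        self.settings[name] = value
        self.overrides.append((name, value, justification))

cfg = ProductConfig()
cfg.loosen("anonymous_access", True, "public status page")
print(cfg.settings["tls_required"])  # untouched defaults stay hardened
print(cfg.overrides)
```

The point of the audit trail in `overrides` is that deviation from the hardened baseline, not hardening itself, becomes the documented exception.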


The modern CISO is a cornerstone of organizational success

Historically, CISOs focused on technical responsibilities, including managing firewalls, monitoring networks, and responding to breaches. Today, they are integral to the C-suite, contributing to decisions that align security initiatives with organizational goals. This shift in responsibilities reflects the growing realization that security is not just an IT function but a critical enabler of business goals, customer trust, and competitive advantage. CISOs are increasingly embedded in the strategic planning process, ensuring that cybersecurity initiatives support overall business goals rather than operate as standalone activities. ... One of the most critical aspects of the modern CISO role is integrating security into operational processes without disrupting productivity. This involves working closely with operations teams to design workflows prioritizing efficiency and security. This aspect of their responsibility ensures that security does not become a bottleneck for business operations but enhances operational resilience, efficiency, and productivity. ... The CISO of tomorrow will redefine success by aligning cybersecurity with business objectives, fostering a culture of shared responsibility, and driving resilience in the face of emerging risks like AI-driven attacks, quantum threats, and global regulatory pressures.


Key Infrastructure Modernization Trends for Enterprises

Cloud providers and data centers need advanced cooling technologies, including rear-door heat exchange, immersion and direct-to-chip systems. Sustainable power sources such as solar and wind must supplement traditional energy resources. These infrastructure changes will support new chip generations, increased rack densities and expanding AI requirements while enabling edge computing use cases. "Liquid cooling has evolved to move from cooling the broader data center environment to getting closer and even within the infrastructure," Hewitt said. "Liquid-cooled infrastructure remains niche today in terms of use cases but will become more predominant as next generations of GPUs and CPUs increase in power consumption and heat production." ... Document existing business processes and workflows to improve visibility and identify gaps suitable for AI implementation. Organizations must organize data for AI tools that can bring in improvements, keep track of where the data resides to organize it for AI use, build internal guidelines for training and testing AI-driven workflows, and create robust controls for processes that incorporate AI agents.


Being Functionless: How to Develop a Serverless Mindset to Write Less Code!

As the adoption of FaaS increased, cloud providers added a variety of language runtimes to cater to different computational needs, skills, etc., offering something for most programmers. Language runtimes such as Java, .NET, Node.js, Python, Ruby, and Go are the most popular and widely adopted. However, this also brings some challenges to organizations adopting serverless technology. More than technology challenges, these are mindset challenges for engineers. ... Sustainability is a crucial aspect of modern cloud operation. Consuming renewable energy, reducing carbon footprint, and achieving green energy targets are top priorities for cloud providers. Cloud providers invest in efficient power and cooling technologies and operate an efficient server population to achieve higher utilization. For this reason, AWS recommends using managed services for efficient cloud operation, as part of their Well-Architected Framework best practices for sustainability. ... For engineers new to serverless, adapting their thinking to its demands can be challenging. Hence, you hear about the serverless mindset as a prerequisite to adopting serverless. This is because working with serverless requires a new way of thinking, developing, and operating applications in the cloud.


Unlocking opportunities for growth with sovereign cloud

Although there is no standard definition of what constitutes a “sovereign cloud,” there is a general understanding that it must ensure sovereignty at three fundamental levels: data, operations, and infrastructure. Sovereign cloud solutions, therefore, have highly demanding requirements when it comes to digital security and the protection of sensitive data, from technical, operational, and legal perspectives. The sovereign cloud concept also opens up avenues for competition and innovation, particularly among local cloud service providers within the UK. In a recent PwC survey, 78% of UK business leaders said they have adopted cloud in most or all areas of their organisations. However, many of these cloud providers operate and function outside of the country, usually across the pond. The development of sovereign cloud offerings provides the perfect push for UK cloud service providers to increase their market share, providing local tools to power local innovation. For a large-scale, accessible, and competitive sovereign cloud ecosystem to emerge, a combination of certain factors is essential. Firstly, partnerships are crucial. Developing local sovereign cloud solutions that offer the same benefits and ease of use as large hyperscalers is a significant challenge.


The Tipping Point: India's Data Center Revolution

"Data explosion and data localization are paving the way for a data center revolution in India. The low data tariff plans, access to affordable smartphones, adoption of new technologies and growing user base of social media, e-commerce, gaming and OTT platforms are some of the key triggers for data explosion. Also, AI-led demand, which is expected to increase multi-fold in the next 3-5 years, presents significant opportunities. This, coupled with favourable regulatory policies from the Central and State governments, the draft Digital Personal Data Protection Bill, and the infrastructure status are supporting the growth prospects," said Anupama Reddy, Vice President and Co-Group Head - Corporate Ratings, ICRA. ... The high-octane data center industry comes with its own set of challenges: high operational costs alongside hurdles in scalability, cybersecurity, sustainability, and building a skilled workforce. Power and cooling are major cost drivers, with data centers consuming 1-1.5 per cent of global electricity. Advanced cooling solutions and energy-efficient hardware can help reduce energy costs while supporting environmental goals.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree

Daily Tech Digest - January 02, 2025

7 Practices to Bolster Cloud Security and Keep Attackers at Bay

AI tools can facilitate quicker threat detection, investigation, and response. All healthy cloud security postures should utilize ML-based user and entity behavior analytics (UEBA) tools. Such tools effectively identify anomalous behavior across the network, while facilitating rapid investigation of potential threats and automating responses to mitigate and remediate attacks. Ideally, security professionals want to find vulnerabilities before an attack occurs, and such AI tools can help to do just that. ... When a threat occurs in the cloud, it can sometimes be difficult to assess the potential impact across a distributed or multitenant surface. By utilizing a centralized platform, security personnel have access to a response center that can automate workflows by orchestrating with different cloud applications, which in turn reduces the mean time to resolve (MTTR) incidents and threats. ... By correlating access and security logs from cloud applications, security personnel can identify attempts at data exfiltration from the cloud. As a quick example, if a SOC professional is investigating potential customer data exfiltration from a cloud-based CRM tool, he or she would want to correlate the logs of that CRM tool with the logs of other cloud applications, such as email or team communication tools. 
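The CRM-to-email correlation described above reduces to a time-window join between two log streams. The log fields, users, and thresholds below are invented for illustration; a real SOC would run this against actual CRM and email gateway logs in a SIEM:

```python
# Hedged sketch of log correlation: flag users whose CRM bulk export is
# followed shortly by a large outbound email. Formats and thresholds
# are invented.

from datetime import datetime, timedelta

crm_logs = [
    {"user": "alice", "event": "bulk_export", "time": datetime(2025, 1, 2, 9, 0)},
    {"user": "bob",   "event": "bulk_export", "time": datetime(2025, 1, 2, 14, 0)},
]
email_logs = [
    {"user": "alice", "attachment_mb": 120, "time": datetime(2025, 1, 2, 9, 20)},
    {"user": "bob",   "attachment_mb": 1,   "time": datetime(2025, 1, 2, 18, 0)},
]

def correlate(crm, email, window=timedelta(hours=1), min_mb=50):
    """Return users with a large outbound email within `window` of a CRM export."""
    suspicious = []
    for c in crm:
        for e in email:
            if (e["user"] == c["user"]
                    and timedelta(0) <= e["time"] - c["time"] <= window
                    and e["attachment_mb"] >= min_mb):
                suspicious.append(c["user"])
    return suspicious

print(correlate(crm_logs, email_logs))  # → ['alice']
```

Here alice's 120 MB email 20 minutes after her bulk export is flagged, while bob's small, hours-later email is not; the same join extends naturally to team-chat or file-sharing logs.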


6 AI-Related Security Trends to Watch in 2025

As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps — or the practice of managing and monitoring AI models in production — converge into a broader, all-encompassing xOps management approach, Holt says. The push to AI-enabled software is increasingly blurring the lines between traditional declarative apps that follow predefined rules to achieve specific outcomes, and LLMs and GenAI apps that dynamically generate responses based on patterns learned from training data sets, Holt says. ... The easy availability of a wide and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies at many organizations and spawned a new set of challenges for already overburdened security teams. ... "If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations like the EU AI Act start to take effect," she says.


Working in Cyber Threat Intelligence (CTI)

“The analysis of an adversary’s intent, opportunity, and capability to do harm is known as cyber threat intelligence.” It’s not just about finding some IOCs and sending them to the SOC. It’s about providing context about adversary activity for other security teams to help prioritize cyber defense efforts. While there are more steps than this, in short we collect intrusion data and analyze it, looking for correlations and trends in observed malicious activity. With that analysis, we can provide actionable insights that keep defenders focused only on the most relevant threats. ... Aside from everything in the “What CTI Isn’t” section, the biggest challenge in CTI is that it’s next to impossible to get decent intel requirements. “Just get us intel” isn’t a thing. We need information in order to provide relevant intelligence. What strategic initiatives, products, technologies, partnerships, etc. are of particular interest to the leadership? What are all of your countries of operation? What are considered the most critical assets? How would a threat actor achieving their objectives impede the organization’s mission? This is unfortunately an ongoing problem that many CTI analysts and CTI managers struggle with, and it often leads to intel analysts winging it.


What’s Ahead in Generative AI in 2025?

In the coming year, prompt engineering will continue its rapid maturation into a substantial body of proven practices for eliciting the correct output from LLMs and other foundation models. Within generative AI development tool sets, embedding libraries will become an essential component for developers to build increasingly sophisticated similarity searches that span a diverse range of data modalities. The recent TDWI survey on enterprise AI readiness shows that 28% of organizations already use or are deploying vector databases to store vector embeddings for use with AI models, while 32% plan to adopt those databases in the next few years. In addition, generative AI developers in 2025 will have access to a growing range of tools for no-code development of “agentic” applications that provide autonomous LLM-driven copilot, chatbot, and other functionality and that can be orchestrated over more complex process environments. ... Developers will have access in 2025 to a growing range of sophisticated models and data for building, training, and optimizing generative AI applications—including both commercial and open-source models. The recent TDWI survey on data and analytics trends showed that around 25% of enterprises are experimenting with private or public generative AI models, while 17% are building generative AI apps that use company data with pretrained models. 
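A similarity search over embeddings reduces, at its core, to ranking stored vectors by a distance metric such as cosine similarity. The tiny hand-made vectors below stand in for real model embeddings, whose dimensions would number in the hundreds or thousands:

```python
# Minimal sketch of embedding-based similarity search: rank stored
# vectors by cosine similarity to a query vector. Vectors are toy
# stand-ins for real model embeddings.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

library = {
    "refund policy":    [0.9, 0.1, 0.0],
    "shipping times":   [0.1, 0.9, 0.1],
    "return a product": [0.8, 0.2, 0.1],
}

def search(query_vec, k=2):
    """Return the k library entries most similar to the query vector."""
    ranked = sorted(library, key=lambda name: cosine(library[name], query_vec), reverse=True)
    return ranked[:k]

print(search([0.85, 0.15, 0.05]))  # → ['refund policy', 'return a product']
```

Vector databases of the kind the TDWI survey asks about perform essentially this operation, but with approximate-nearest-neighbor indexes so it scales to millions of embeddings.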


This Is The Phrase That Instantly Damages Your Leadership Integrity

There are few phrases that can instantly cause hesitation like “to be honest with you.” A few honorable mentions cause the same damage for the same reasons: In all honesty… Frankly… To tell you the truth… Truthfully, or truthfully speaking… When you casually use a statement like “to be honest with you” in an effort to make yourself more likely to be believed, the exact opposite happens. Instead of trusting you more, listeners trust you less. ... Without leadership integrity, you’d have a very heavy lift trying to get people to believe in you, to listen to you, to count on you and to give you the benefit of the doubt that leaders so desperately need during times of uncertainty, ambiguity and crisis. This is why you don’t want to damage your leadership integrity or cause people to question your credibility by throwing out unthoughtful words or phrases that could give them pause. ... A phrase like “mistakes were made” shows a complete lack of leadership integrity: it signals that someone somewhere made a mistake, but that you take no ownership of it. Instead, accept responsibility and show that you are accountable for both the mistake and the resolution.


Generative AI is not going to build your engineering team for you

Generative AI is like a junior engineer in that you can’t roll their code off into production. You are responsible for it—legally, ethically, and practically. You still have to take the time to understand it, test it, instrument it, retrofit it stylistically and thematically to fit the rest of your code base, and ensure your teammates can understand and maintain it as well. The analogy is a decent one, actually, but only if your code is disposable and self-contained, i.e. not meant to be integrated into a larger body of work, or to survive and be read or modified by others. And hey—there are corners of the industry like this, where most of the code is write-only, throwaway code. ... To state the supremely obvious: giving code review feedback to a junior engineer is not like editing generated code. Your effort is worth more when it is invested into someone else’s apprenticeship. It’s an opportunity to pass on the lessons you’ve learned in your own career. Even just the act of framing your feedback to explain and convey your message forces you to think through the problem in a more rigorous way, and has a way of helping you understand the material more deeply. And adding a junior engineer to your team will immediately change team dynamics. It creates an environment where asking questions is normalized and encouraged, where teaching as well as learning is a constant. 


Architectural Decision-Making: AI Tools as Consensus Builders

In an environment with lots of smart, quick-thinking people it can be a challenge to ensure everyone is heard, especially when the primary mode of interaction is videoconferencing. The online format (a Microsoft Teams group chat) gave people time to contribute their thoughts over a period of days rather than minutes. At various points in the online conversation, participants extracted content from the online discussion board and fed it to a large language model to compare ideas that were present in the dialogue, or to recast the dialogue in a particular person’s voice. ... The benefits of using AI tools are not cost free. It’s important to verify the results of an AI’s synthesis of text because sometimes the AI misinterprets what was written. For example, during our discussion of capabilities and domains, an AI tool interpreted some of my text as stating that the boundaries of a domain are context dependent when in fact, I was making the opposite argument – that a domain must have a consistent definition that is valid across any contexts in which it participates. Another consideration is the ethics of intellectual property ownership and citation of participants’ contributions. 


Perhaps the biggest challenge of IaC operations is drifts — a scenario where runtime environments deviate from their IaC-defined states, creating a festering issue that could have serious long-term implications. These discrepancies undermine the consistency of cloud environments, leading to potential issues with infrastructure reliability and maintainability and even significant security and compliance risks. ... But having additional context for drift, as important as it may be, is only one piece of a much bigger puzzle. Managing large cloud fleets with codified resources introduces more than just drift challenges, especially at scale. Current-gen IaC management tools are effective at addressing resource management, but the demand for greater visibility and control in enterprise-scale environments is introducing new requirements and driving their inevitable evolution. ... The combination of IaC management and CAM empowers teams to manage complexity with clarity and control. As the end of the year approaches, it's 'prediction season' — so here’s mine. Having spent the better part of the last decade building and refining one of the more popular IaC management platforms, I see this as the natural progression of our industry: combining IaC management, automation, and governance with enhanced visibility into non-codified assets.
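Drift detection, at its simplest, is a diff between the IaC-declared state of each resource and the state the runtime actually reports. This minimal sketch uses invented resource shapes, not the API of any real IaC tool:

```python
# Hedged sketch of drift detection: diff the IaC-declared state of each
# resource against what the runtime reports. Resource shapes are invented.

declared = {
    "web-sg": {"port": 443, "cidr": "10.0.0.0/8"},
    "db":     {"instance_type": "m5.large", "encrypted": True},
}
observed = {
    "web-sg": {"port": 443, "cidr": "0.0.0.0/0"},  # widened by hand outside IaC
    "db":     {"instance_type": "m5.large", "encrypted": True},
}

def detect_drift(declared, observed):
    """Return {resource: {attribute: (declared, observed)}} for mismatches."""
    drift = {}
    for name, want in declared.items():
        have = observed.get(name, {})
        diffs = {k: (v, have.get(k)) for k, v in want.items() if have.get(k) != v}
        if diffs:
            drift[name] = diffs
    return drift

print(detect_drift(declared, observed))
```

The security-relevant part is exactly the kind of drift shown here: a firewall rule quietly widened to `0.0.0.0/0` that the IaC definition, and anyone reading it, still believes is restricted.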


4 keys for writing cross-platform apps

One big problem with cross-platform compiling is how asymmetrical it can be. If you’re a macOS user, it’s easy to set up and maintain Windows or Linux virtual machines on the Mac. If you use Linux or Windows, it’s harder to emulate macOS on those platforms. Not impossible, just more difficult—the biggest reason being the legal issues, as macOS’s EULA does not allow it to be used on non-Apple hardware. The easiest workaround is to simply buy a separate Macintosh system and use that. Another option is to use tools like osxcross to perform cross-compilation on a Linux, FreeBSD, or OpenBSD system. Another common option, one most in line with modern software delivery methods, is to use a system like GitHub Actions. The downside is paying for the use of the service, but if you’re already invested in either platform, it’s often the most economical and least messy approach. Plus, it keeps the burden of system maintenance out of your hands. ... The way we write and deploy apps is always in flux. Who would have anticipated the container revolution, for instance? Or predicted the dominant language for machine learning and AI would be Python? To that end, it’s always worth keeping an eye on the future, since cross-platform deployment is fast becoming a must-have feature.


The Connected Revolution: How Integrated Intelligence is Reshaping Drug Development

CI and end-to-end quality are dismantling traditional silos and fostering a seamless, data-driven ecosystem. The use of CI, potentially with data lakes as a way of consolidating vast amounts of data from disparate sources, removes silos that exist between independent systems sitting with siloed departments. The movement of data, for example clinical data that is needed in regulatory submissions, or safety data that is needed alongside regulatory data for regulatory reports, brings a level of fluidity to data management and helps companies optimize time and resources to generate product quality and safety insights. ... For clinical trials, CI and end-to-end quality can significantly enhance patient recruitment and retention. Advanced analytics can identify suitable candidates more efficiently, while real-time monitoring through connected devices can provide continuous data on patient responses and the identification of potential adverse events. This improves the quality of data collected, enhances patient safety and reduces trial time and cost. ... CI and AI-driven regulatory intelligence, in the context of quality-controlled procedures, can support the gathering of global submission requirements and the creation of global submission content, which will then be subject to human review as part of QC.



Quote for the day:

"A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves." -- Laotzu

Daily Tech Digest - January 01, 2025

The Architect’s Guide to Open Table Formats and Object Storage

Data lakehouse architectures are purposefully designed to leverage the scalability and cost-effectiveness of object storage systems, such as Amazon Web Services (AWS) S3, Google Cloud Storage and Azure Blob Storage. This integration enables the seamless management of diverse data types — structured, semi-structured and unstructured — within a unified platform. ... The open table formats also incorporate features designed to boost performance. These also need to be configured properly and leveraged for a fully optimized stack. One such feature is efficient metadata handling, where metadata is managed separately from the data, which enables faster query planning and execution. Data partitioning organizes data into subsets, improving query performance by reducing the amount of data scanned during operations. Support for schema evolution allows table formats to adapt to changes in data structure without extensive data rewrites, ensuring flexibility while minimizing processing overhead.
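Partition pruning can be illustrated with a toy file layout: when data is laid out by a partition key, a query filtering on that key only lists files under matching partitions and skips the rest. Paths and partition names here are invented, not from any real table format:

```python
# Minimal sketch of partition pruning: a query filtering on the
# partition key only touches files in matching partitions.
# File names and layout are invented for illustration.

files_by_partition = {
    "date=2025-01-01": ["a.parquet", "b.parquet"],
    "date=2025-01-02": ["c.parquet"],
    "date=2025-01-03": ["d.parquet", "e.parquet"],
}

def prune(partitions, predicate):
    """Return only the files whose partition key satisfies the predicate."""
    return [
        f
        for part, files in partitions.items()
        if predicate(part)
        for f in files
    ]

# A query with WHERE date = '2025-01-02' scans one file instead of five.
print(prune(files_by_partition, lambda p: p == "date=2025-01-02"))  # → ['c.parquet']
```

Open table formats perform this pruning from table metadata rather than by listing object storage, which is what makes query planning fast even over millions of files.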


The future of open source will be messy

First, it’s important to point out that open source software is both pervasive and foundational. Where would we be without Linux and the vast treasure trove of other open source projects on which the internet is built? However, the vast majority of software, written for use or sale, is not open source. This has always been true. Developers do care about open source, and for good reason, but it is not their top concern. As Redis CEO Rowan Trollope told me in a recent interview, “If you’re the average developer, what you really care about is capability: Does this [software] offer something unique and differentiated that’s awesome that I need in my application.” ... Meanwhile, Meta and the rest of the industry keep releasing new code, calling it open source or open weights (Sam Johnston offers a great analysis), without much concern for what the OSI or anyone else thinks. Johnston may be exaggerating when he says, “The more [the word] open appears in an artificial intelligence product’s branding, the less open it actually tends to be,” but it’s clear that the term open gets used a lot, starting with category leader OpenAI, which is not open in any discernible sense, without much concern for any traditional definitions. 


What’s next for generative AI in 2025?

“Data is the lifeblood of any AI initiative, and the success of these projects hinges on the quality of the data that feeds the models,” said Andrew Joiner, CEO of Hyperscience, which develops AI-based office work automation tools. “Alarmingly, three out of five decision makers report their lack of understanding of their own data inhibits their ability to utilize genAI to its maximum potential. The true potential…lies in adopting tailored SLMs, which can transform document processing and enhance operational efficiency.” Gartner recommends that organizations customize SLMs to specific needs for better accuracy, robustness, and efficiency. “Task specialization improves alignment, while embedding static organizational knowledge reduces costs. Dynamic information can still be provided as needed, making this hybrid approach both effective and efficient,” the research firm said. ... While Agentic AI architectures are a top emerging technology, they’re still two years away from reaching the lofty automation expected of them, according to Forrester. While companies are eager to push genAI into complex tasks through AI agents, the technology remains challenging to develop because it mostly relies on synergies between multiple models, customization through retrieval augmented generation (RAG), and specialized expertise. 


The Perils of Security Debt: Serious Pitfalls to Avoid

Security debt is caused by a failure to “build security in” to software from design through deployment as part of the SDLC. Security debt accumulates when a development organization releases software with known issues, deferring the redressal of its weaknesses and vulnerabilities. Sometimes the organization skips certain test cases or scenarios in pursuit of faster deployment, failing in the process to test the software thoroughly. Sometimes the business decides that the pressure to finish a project is so great that it makes more sense to release now and fix issues later. Later is better than never, but when “later” never arrives, existing security debt becomes worse. ... Great leadership is the beacon that not only charts the course but also ensures your crew – your IT team, support staff, and engineers – are well-prepared to face the challenges ahead. It instills discipline, vigilance, and a culture of security that can withstand the fiercest digital storms. The Board and leadership must understand and champion the importance of security for the organization. By setting the tone at the top, they can drive the cultural and procedural changes needed to prevent the accumulation of security debt. Periodic review and monitoring of security metrics, and identifying and tracking security debt as a risk, can help keep the organization accountable and on track.


The long-term impacts of AI on networking

Every enterprise that self-hosted AI told me the mission demanded more bandwidth to support “horizontal” traffic than their normal applications, more than their current data center needed to support. Ten of the group said that this meant they’d need the “cluster” of AI servers to have faster Ethernet connections and higher-capacity switches. Everyone agreed that a real production deployment of on-premises AI would need new network devices, and fifteen said they bought new switches even for their large-scale trials. The biggest problem with the data center network I heard from those with experience is that they believed they built up more of an AI cluster than they needed. Running a popular LLM, they said, requires hundreds of GPUs and servers, but small language models can run on a single system, and a third of current self-hosting enterprises said they believed it is best to start small, with small models, and build up only when you have experience and can demonstrate a need. This same group also pointed out that control was needed to ensure only truly useful AI applications were run. “Applications otherwise build up, exceed, and then increase, the size of the AI cluster,” said users.


Bridging Skill Gaps in the Automotive Industry with AI-Led Immersive Simulations

This crisis of personnel shortfall is particularly acute in sectors like autonomous driving and AI-driven manufacturing, where the required skillset surpasses the capabilities of the current workforce. This alarming shortage of specialised expertise poses a serious threat to the industry’s progress. It could potentially lead to production halts at various facilities, delay the launch of next-generation vehicles, and hinder the transition to self-driving cars powered by sustainable energy. In order to address this issue, orthodox educational methods must be modernised to incorporate cutting-edge technologies like AI and robotics. ... Unlike traditional training, which often involves static lessons or expensive hands-on practice, immersive simulations allow workers to practice in environments that would be too risky or costly in real life. For example, with autonomous vehicles, workers can practice fixing and calibrating vehicle systems in a virtual world without the risk of damaging anything. These simulations can also create different road conditions for workers to experience, helping them build critical decision-making skills without real-world consequences. 


AI agents might be the new workforce, but they still need a manager

AI agents need to be thoughtfully managed, just as is the case with human work, and there's work to be done before an agentic AI-driven workforce can truly assume a broad range of tasks. "While the promise of agentic AI is evident, we are several years away from widespread agentic AI adoption at the enterprise level," said Scott Beechuk, partner with Norwest Venture Partners. "Agents must be trustworthy given their potential role in automating mission-critical business processes." The traceability of AI agents' actions is one issue. "Many tools have a hard time explaining how they arrived at their responses from users' sensitive data and models struggle to generalize beyond what they have learned," said Ananthakrishnan. ... Unpredictability is a related challenge, as LLMs "operate like black boxes," said Beechuk. "It's hard for users and engineers to know if the AI has successfully completed its task and if it did so correctly." ... Human workers also are capable of collaborating easily and on a regular basis. For AI workers, it's a different story. "Because agents will interact with multiple systems and data stores, achieving comprehensive visibility is no easy task," said Ananthakrishnan. It's important to have visibility to capture each action an agent takes.


Change management: Achieve your goals with the right change model

You need a good leadership team of influential people who are all pulling in the same direction. This is the only way to implement upcoming changes and anchor them in the company. It is important to include people in the leadership team who have a great deal of influence and/or are well respected by the workforce. At the same time, these people must be fully committed to the planned change. ... Communication comes before implementation. Those affected must understand it to become participants or supporters. Initiating measures without first explaining the context to those involved would unnecessarily create unrest in the company. When communicating, it makes sense to proceed in several steps: the change team first informs the clients and gets a “go” from them. After that, the change team informs the managers so that they can answer questions from employees during company-wide communication. ... Quick wins must be realized and made visible to increase motivation. Quick wins should therefore also be identified when defining objectives, because success is important to ensure that the initial motivation does not fizzle out. Initial successes should be related to the overarching goal, because then they strengthen intrinsic motivation. Small successes can thus have a big impact.


Forrester on cybersecurity budgeting: 2025 will be the year of CISO fiscal accountability

Forrester sees the increasing adoption of AI and generative AI (gen AI) as driving the needed updates to infrastructure. “Any Gen AI project that we discussed with customers ultimately becomes a data integration project,” says Pascal Matska, vice president and research director at Forrester. “You have to invest into specific capabilities and platforms that run specific AI workloads in the most suitable infrastructure at the right price point, and also drive investments into cloud-native technologies such as Kubernetes and containers and modern data platforms that really are there to help you drive out some of the frictions that exist within the different business silos,” Matska continued. ... CISOs who drive gains in revenue advance their careers. “When something touches as much revenue as cybersecurity does, it is a core competency. And you can’t argue that it isn’t,” Jeff Pollard, VP and principal analyst at Forrester, said during his keynote titled “Cybersecurity Drives Revenue: How to Win Every Budget Battle” at the company’s Security and Risk Forum in 2022. Budgeting to protect revenue needs to start with the weakest, most at-risk areas. These include software supply chain security, API security, human risk management, and IoT/OT threat detection. 


Passkey technology is elegant, but it’s most definitely not usable security

"The problem with passkeys is that they're essentially a halfway house to a password manager, but tied to a specific platform in ways that aren't obvious to a user at all, and liable to easily leave them unable to access ... their accounts," wrote the Danish software engineer, who created Ruby on Rails and is the CTO of web-based software development firm 37signals. "Much the same way that two-factor authentication can do, but worse, since you're not even aware of it." ... The security benefits of passkeys at the moment are also undermined by an undeniable truth. Of the hundreds of sites supporting passkeys, there isn't one I know of that allows users to ditch their password completely. The password is still mandatory. And with the exception of Google's Advanced Protection Program, I know of no sites that won't allow logins to fall back on passwords, often without any additional factor. ... Under the FIDO2 spec, the passkey can never leave the security key, except as an encrypted blob of bits when the passkey is being synced from one device to another. The secret key can be unlocked only when the user authenticates to the physical key using a PIN, password, or, most commonly, a fingerprint or face scan. When the user authenticates with a biometric, the biometric data never leaves the security key, just as it never leaves Android and iOS phones and computers running macOS or Windows.
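The "secret never leaves the device" property described above is the core of the FIDO2/WebAuthn challenge-response model: at registration the server stores only a public key, and at login the authenticator signs a fresh server-issued challenge on-device. The sketch below illustrates that flow with a Schnorr signature over a deliberately tiny discrete-log group; real passkeys use ES256 (ECDSA over P-256), and these toy parameters offer no security whatsoever, they only show the shape of the protocol.

```python
import hashlib
import secrets

# Toy discrete-log group: g = 4 generates a subgroup of prime order
# q = 1019 inside Z*_p with p = 2039 = 2q + 1. Illustration only.
P, Q, G = 2039, 1019, 4

def _h(r: int, msg: bytes) -> int:
    """Hash the commitment and message down to a challenge scalar in Z_q."""
    digest = hashlib.sha256(r.to_bytes(4, "big") + msg).digest()
    return int.from_bytes(digest, "big") % Q

def keygen():
    """The authenticator generates x; only y = g^x ever leaves the device."""
    x = secrets.randbelow(Q - 1) + 1   # private key, stays on the authenticator
    return x, pow(G, x, P)             # (private, public)

def sign(x: int, challenge: bytes):
    """Schnorr signature over the server's challenge, computed on-device."""
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)
    e = _h(r, challenge)
    s = (k + x * e) % Q
    return e, s

def verify(y: int, challenge: bytes, sig) -> bool:
    """The server checks the response using only the stored public key."""
    e, s = sig
    r = (pow(G, s, P) * pow(y, (Q - e % Q) % Q, P)) % P  # r = g^s * y^(-e)
    return _h(r, challenge) == e

# Registration stores pub; each login signs a fresh random challenge.
priv, pub = keygen()
challenge = secrets.token_bytes(16)
assert verify(pub, challenge, sign(priv, challenge))
# A tampered signature or a replayed response to a different challenge
# fails verification with overwhelming probability.
```

Because the server holds only `pub` and each login signs a fresh random challenge, a database breach leaks nothing reusable and a captured response cannot be replayed, which is exactly the property that distinguishes passkeys from passwords.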



Quote for the day:

"You are a true success when you help others be successful." -- Jon Gordon