
Daily Tech Digest - April 04, 2026


Quote for the day:

“We are what we pretend to be, so we must be careful about what we pretend to be.” -- Kurt Vonnegut




One-Time Passcodes Are Gateway for Financial Fraud Attacks

The article "One-Time Passcodes Are Gateway for Financial Fraud Attacks" highlights the increasing vulnerability of SMS-based one-time passcodes (OTPs) as a primary authentication method. Threat intelligence from Recorded Future reveals that fraudsters are increasingly exploiting real-time communication weaknesses through social engineering and impersonation to intercept these codes, facilitating account takeovers and payment fraud. This shift indicates a growing industrialization of fraud operations where attackers no longer need to defeat complex technical security controls but instead manipulate user behavior during live interactions. Security experts, including those from Coalition, argue that OTPs represent "low-hanging fruit" for cybercriminals and advocate for phishing-resistant alternatives like FIDO-based hardware authentication. Consequently, global regulators are taking action to mitigate these risks. For instance, Singapore and the United Arab Emirates have already phased out SMS-based OTPs for banking logins, while India and the Philippines are moving toward multifactor approaches involving biometrics and device-based identification. Although U.S. regulators still recognize OTPs as part of multifactor authentication, the rise of SIM-swapping and sophisticated social engineering is pushing the financial industry toward more resilient, multi-signal authentication models that integrate behavioral patterns and device identity to better balance security with user experience.


Evaluating the ethics of autonomous systems

MIT researchers, led by Professor Chuchu Fan and graduate student Anjali Parashar, have developed a pioneering evaluation framework titled SEED-SET to assess the ethical alignment of autonomous systems before their deployment. This innovative system addresses the challenge of balancing measurable outcomes, such as cost and reliability, with subjective human values like fairness. Designed to operate without pre-existing labeled data, SEED-SET utilizes a hierarchical structure that separates objective technical performance from subjective ethical criteria. By employing a Large Language Model as a proxy for human stakeholders, the framework can consistently evaluate thousands of complex scenarios without the fatigue often experienced by human reviewers. In testing involving realistic models like power grids and urban traffic routing, the system successfully pinpointed critical ethical dilemmas, such as strategies that might inadvertently prioritize high-income neighborhoods over disadvantaged ones. SEED-SET generated twice as many optimal test cases as traditional methods, uncovering "unknown unknowns" that static regulatory codes often miss. This research, presented at the International Conference on Learning Representations, provides a systematic way to ensure AI-driven decision-making remains well-aligned with diverse human preferences, moving beyond simple technical optimization to foster more equitable technological solutions for high-stakes societal challenges.
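
The core mechanism is easy to picture in code. The sketch below is a hypothetical reconstruction of the idea as summarized above, not SEED-SET's actual interface: the scenario fields, the fairness rubric, and llm_judge() are all illustrative, and the mock judge only exists to keep the sketch runnable.

```python
# Hypothetical sketch of the hierarchy described above: objective
# technical metrics are computed directly, while an LLM acts as a proxy
# for human stakeholders on subjective criteria. Not SEED-SET's API.

def llm_judge(prompt: str) -> float:
    """Stand-in for an LLM scoring a scenario against a human-value
    rubric. Replace with a real model client; the fixed return value
    below only keeps this sketch runnable."""
    return 0.5

def evaluate_scenario(scenario: dict) -> dict:
    # Objective layer: measurable technical performance.
    objective = {"cost": scenario["cost"],
                 "reliability": scenario["reliability"]}
    # Subjective layer: the same scenario, scored against a value rubric
    # kept deliberately separate from the technical metrics.
    prompt = ("Rate from 0 to 1 how fairly this plan treats "
              f"disadvantaged neighborhoods: {scenario['description']}")
    subjective = {"fairness": llm_judge(prompt)}
    return {"objective": objective, "subjective": subjective}

print(evaluate_scenario({"cost": 1.2e6, "reliability": 0.97,
                         "description": "route traffic via districts A, B"}))
```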


Blast Radius of TeamPCP Attacks Expands Amid Hacker Infighting

The article "Blast Radius of TeamPCP Attacks Expands Amid Hacker Infighting" details the escalating impact of supply chain compromises targeting open-source projects like LiteLLM and Trivy. Attributed to the threat group TeamPCP, these attacks have victimized high-profile entities such as the European Commission and AI startup Mercor by harvesting cloud credentials and API keys. The situation has become increasingly volatile due to "infighting" and a lack of clear collaboration between cybercriminal factions. While TeamPCP initiates the intrusions, groups like ShinyHunters and Lapsus$ have begun leaking and claiming credit for the stolen data, leading to a murky ecosystem where multiple actors converge on the same access points. Further complicating the threat landscape is TeamPCP's formal alliance with the Vect ransomware gang, which utilizes a three-stage remote access Trojan to deepen their foothold. Security experts emphasize that the speed of these attacks—often moving from initial compromise to data exfiltration within hours—necessitates a rapid response. Organizations are urged to move beyond merely removing malicious packages; they must immediately revoke exposed secrets, rotate cloud credentials, and audit CI/CD workflows to mitigate the risk of follow-on extortion and ransomware deployment by this expanding criminal network.


Beyond RAG: Architecting Context-Aware AI Systems with Spring Boot

The article "Beyond RAG: Architecting Context-Aware AI Systems with Spring Boot" introduces Context-Augmented Generation (CAG), an architectural refinement designed to address the limitations of standard Retrieval-Augmented Generation (RAG) in enterprise environments. While traditional RAG successfully grounds AI responses in external data, it often ignores vital runtime factors such as user identity, session history, and specific workflow states. CAG solves this by introducing a dedicated context manager that assembles and normalizes these contextual signals before they reach the core RAG pipeline. This additional layer allows systems to provide answers that are not only factually accurate but also contextually appropriate for the specific user and situation. A key advantage of this design is its modularity; the context manager operates independently of the retriever and large language model, requiring no changes to the underlying infrastructure or model retraining. By isolating contextual reasoning, enterprise teams can achieve better traceability, consistency, and governance across their AI applications. Specifically targeting Java developers, the piece demonstrates how to implement this pattern using Spring Boot, moving AI beyond simple prototypes toward production-ready systems that can handle complex, multi-departmental constraints and dynamic organizational policies with much greater precision.


Eliminating blind spots – nailing the IPv6 transition

The article "Eliminating blind spots – nailing the IPv6 transition" highlights the critical shift from IPv4 to IPv6, noting that global adoption reached 45% by 2026. Despite this growth, many IT teams remain overly reliant on legacy dual-stack monitoring that prioritizes IPv4, leading to significant visibility gaps. Because IPv6 operates differently—utilizing 128-bit addresses and emphasizing ICMPv6 and AAAA records—traditional scanning and monitoring methods often fail to detect degraded performance or security vulnerabilities. These "blind spots" can result in service outages that teams only discover through user complaints rather than proactive alerts. To navigate this transition successfully, organizations must adopt monitoring solutions with robust auto-discovery capabilities and real-time notifications tailored to IPv6-specific behaviors. The article emphasizes that an effective transition does not require a complete infrastructure rebuild; instead, it demands a mindset shift where IPv6 is treated as a primary protocol rather than a secondary concern. By integrating comprehensive visibility across cloud, data centers, and OT environments, businesses can ensure network resilience and security. Ultimately, proactively addressing these monitoring deficiencies allows IT departments to manage the increasing complexity of modern internet traffic while avoiding the pitfalls of reactive troubleshooting in a rapidly evolving digital landscape.


Post-Quantum Readiness Starts Long Before Q-Day

The Forbes article "Post-Quantum Readiness Starts Long Before Q-Day" by Etay Maor highlights the urgent need for organizations to prepare for the inevitable arrival of "Q-Day"—the moment quantum computers become capable of shattering current public-key cryptography standards. While significant quantum utility may be years away, the author warns of the "harvest now, decrypt later" threat, where malicious actors collect encrypted sensitive data today to decrypt it once quantum technology matures. Consequently, post-quantum readiness must be viewed as a critical leadership and business-risk issue rather than a distant technical concern. Maor argues that the transition will be a multi-year journey, not a simple switch, requiring deep visibility into an organization’s cryptographic sprawl to identify vulnerabilities. He recommends a hybrid security approach, utilizing standards like TLS 1.3 with post-quantum-ready cipher suites to protect high-priority "crown jewel" data while the broader ecosystem catches up. By prioritizing sensitive traffic and adopting a centralized operating model, such as a quantum-aware Secure Access Service Edge (SASE), businesses can build long-term resilience. Ultimately, proactive preparation is essential to safeguarding data confidentiality against the future capabilities of quantum computing, ensuring that security measures evolve alongside emerging threats.
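
The "deep visibility into cryptographic sprawl" step can begin with something as small as recording what each endpoint actually negotiates today. Below is a hedged Python starting point using only the standard library, with illustrative hostnames; a real inventory would also catalog certificates, libraries, and stored data.

```python
# Record the negotiated TLS version and cipher per endpoint, flagging
# anything short of TLS 1.3 for review. A crude first pass at the
# cryptographic inventory described above; hostnames are examples.

import socket
import ssl

def tls_profile(host: str, port: int = 443) -> dict:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            name, proto, bits = tls.cipher()
            return {"host": host, "version": tls.version(),
                    "cipher": name, "bits": bits}

for host in ("example.com", "www.python.org"):
    p = tls_profile(host)
    flag = "" if p["version"] == "TLSv1.3" else "  <- review before Q-Day"
    print(f'{p["host"]}: {p["version"]} {p["cipher"]}{flag}')
```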


Confidential computing resurfaces as security priority for CIOs

Confidential computing has resurfaced as a critical security priority for CIOs, addressing the long-standing industry gap of protecting data while it is actively being processed. While traditional encryption safeguards data at rest and in transit, confidential computing utilizes hardware-encrypted Trusted Execution Environments (TEEs) to isolate sensitive information from the surrounding infrastructure, cloud providers, and even privileged users. This technology is gaining significant traction as organizations seek to protect intellectual property and regulated analytics workloads, especially within the context of generative AI. According to IDC, 75% of surveyed organizations are already testing or adopting the technology in some form. Unlike earlier versions that required deep technical expertise and application redesign, modern confidential computing integrates seamlessly into existing virtual machines and containers. This evolution allows developers to maintain current workflows while gaining hardware-enforced security boundaries that software controls alone cannot provide. Gartner has notably ranked confidential computing as a top three technology to watch for 2026, highlighting its growing importance in sectors like finance and healthcare. By providing hardware-rooted attestation and verifiable trust, it helps organizations minimize risk exposure and maintain regulatory compliance. Ultimately, as confidential computing converges with AI and data security management platforms, it will become an essential component of a robust zero-trust architecture.


Introducing the Agent Governance Toolkit: Open-source runtime security for AI agents

Microsoft has introduced the Agent Governance Toolkit, an open-source project designed to provide critical runtime security for autonomous AI agents. As AI evolves from simple chat interfaces to independent actors capable of executing complex trades and managing infrastructure, the need for robust oversight has become paramount. Released under the MIT license, this framework-agnostic toolkit addresses the risks outlined in the OWASP Top 10 for Agentic Applications through deterministic, sub-millisecond policy enforcement. The suite comprises seven specialized packages, including "Agent OS" for stateless policy execution and "Agent Mesh" for cryptographic identity and dynamic trust scoring. Drawing inspiration from battle-tested operating system principles, the toolkit incorporates features like execution rings, circuit breakers, and emergency kill switches to ensure reliable and secure operations. It seamlessly integrates with popular frameworks like LangChain and AutoGen, allowing developers to implement governance without rewriting core code. By mapping directly to regulatory requirements like the EU AI Act, the toolkit empowers organizations to proactively manage goal hijacking, tool misuse, and cascading failures. Ultimately, Microsoft’s initiative fosters a secure ecosystem where autonomous agents can scale safely across diverse platforms, including Azure Kubernetes Service, while remaining subject to transparent and community-driven governance standards.
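
The toolkit's own package APIs are not reproduced here, but two of the operating-system ideas it borrows, circuit breakers and kill switches, are easy to sketch generically; all names below are illustrative rather than the toolkit's.

```python
# Generic sketch of a circuit breaker and an operator kill switch
# wrapped around agent tool calls. Illustrative, not the Agent
# Governance Toolkit's actual API.

import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, cooldown_s: float = 30.0):
        self.max_failures, self.cooldown_s = max_failures, cooldown_s
        self.failures, self.opened_at = 0, 0.0
    def allow(self) -> bool:
        if self.failures < self.max_failures:
            return True
        if time.monotonic() - self.opened_at > self.cooldown_s:
            self.failures = 0   # half-open: let one attempt through
            return True
        return False
    def record(self, ok: bool) -> None:
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

KILL_SWITCH = {"engaged": False}  # flipped by an operator, not the agent

def governed_call(tool, args, breaker: CircuitBreaker):
    if KILL_SWITCH["engaged"]:
        raise RuntimeError("kill switch engaged: all agent actions halted")
    if not breaker.allow():
        raise RuntimeError("circuit open: tool temporarily disabled")
    try:
        result = tool(*args)
        breaker.record(ok=True)
        return result
    except Exception:
        breaker.record(ok=False)   # repeated failures open the circuit
        raise
```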


Twinning! Quantum ‘Digital Twins’ Tackle Error Correction Task to Speed Path to Reliable Quantum Computers

Researchers have introduced a groundbreaking classical simulation method that utilizes "digital twins" to significantly accelerate the development of reliable, fault-tolerant quantum computers. By creating highly detailed virtual replicas of quantum hardware, scientists can now model quantum error correction (QEC) processes for systems containing up to 97 physical qubits. This approach addresses the massive overhead traditionally required to stabilize fragile qubits, where multiple physical units are needed to form a single, error-resistant logical qubit. Unlike traditional methods that require building and debugging expensive physical prototypes, these digital twins leverage Monte Carlo simulations to model error propagation and decoding strategies on standard cloud computing nodes in roughly an hour. This shift allows researchers to rapidly iterate and optimize hardware parameters and error-fixing codes without the exorbitant costs and time constraints of physical testing. Functioning essentially as a "virtual wind tunnel," this innovation provides a critical, scalable framework for designing the complex error-correction layers necessary for practical quantum computation. By streamlining the path toward fault tolerance, this digital twin methodology represents a profound, practical advancement that enables the quantum industry to refine complex systems virtually, ultimately bringing the reality of large-scale, dependable quantum computing closer than ever before.
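
A toy version of the Monte Carlo idea conveys the approach: sample random physical errors, decode, and count logical failures. Real QEC digital twins model far richer noise, hardware parameters, and decoders; the repetition-code sketch below, with illustrative parameters, only shows the sampling loop.

```python
# Toy Monte Carlo estimate of how a distance-3 repetition code maps a
# physical bit-flip rate to a logical error rate under majority-vote
# decoding. Illustrative of the sampling approach only.

import random

def logical_error_rate(p_phys: float, n_qubits: int = 3,
                       trials: int = 100_000) -> float:
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_phys for _ in range(n_qubits))
        if flips > n_qubits // 2:   # majority-vote decoding fails
            failures += 1
    return failures / trials

for p in (0.01, 0.05, 0.2):
    print(f"physical={p:.2f}  logical~{logical_error_rate(p):.4f}")
```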


The end of the org chart: Leadership in an agentic enterprise

The traditional organizational chart is becoming obsolete as modern enterprises transition toward an "agentic" model where AI agents and humans collaborate as teammates. According to industry expert Steve Tout, the sheer volume of digital information—now doubling every eight hours—has overwhelmed human judgment, rendering legacy hierarchical structures and the "people-process-technology" framework increasingly insufficient. In this evolving landscape, AI agents handle repeatable cognitive tasks, synthesis, and data-heavy "grunt work," while human professionals retain control over high-level judgment, ethical accountability, and client trust. Organizations like McKinsey are already pioneering this shift, deploying tens of thousands of agents to streamline complex workflows. Leadership is consequently being redefined; it is no longer about maintaining a strict span of control or following predictable reporting lines. Instead, next-generation leaders must become architects of integrated networks, managing both human talent and agentic systems to foster deep organizational intelligence. By protecting human decision-makers from information fatigue, agentic enterprises can achieve greater clarity and faster strategic alignment. Ultimately, success in this new era requires a fundamental shift from viewing technology as a standalone tool to embracing it as a collaborative force that enhances the unique human capacity for sensemaking in complex, fast-moving business environments.

Daily Tech Digest - August 14, 2025


Quote for the day:

"Act as if what you do makes a difference. It does." -- William James


What happens the day after superintelligence?

As context, artificial superintelligence (ASI) refers to systems that can outthink humans on most fronts, from planning and reasoning to problem-solving, strategic thinking and raw creativity. These systems will solve complex problems in a fraction of a second that might take the smartest human experts days, weeks or even years to work through. ... So ask yourself, honestly, how will humans act in this new reality? Will we reflexively seek advice from our AI assistants as we navigate every little challenge we encounter? Or worse, will we learn to trust our AI assistants more than our own thoughts and instincts? ... Imagine walking down the street in your town. You see a coworker heading towards you. You can’t remember his name, but your AI assistant does. It detects your hesitation and whispers the coworker’s name into your ears. The AI also recommends that you ask the coworker about his wife, who had surgery a few weeks ago. The coworker appreciates the sentiment, then asks you about your recent promotion, likely at the advice of his own AI. Is this human empowerment, or a loss of human agency? ... Many experts believe that body-worn AI assistants will make us feel more powerful and capable, but that’s not the only way this could go. These same technologies could make us feel less confident in ourselves and less impactful in our lives.


Confidential Computing: A Solution to the Uncertainty of Using the Public Cloud

Confidential computing is a way to ensure that no external party can look at your data and business logic while they are being executed; it secures data in use. Combined with the already established protections for data at rest and data in transit, it ensures that in all likelihood no external party can access secured data running in a confidential computing environment, wherever that may be. ... To execute services in the cloud, a company needs to be sure that its data and business logic cannot be accessed or changed by third parties, especially by the cloud provider's system administrators. The workload needs to be protected, or better, executed within the company's Trusted Compute Base (TCB): the environment where specific security standards are set to restrict all possible access to data and business logic. ... Here, attestation is used to verify that a confidential environment (instance) is running securely in the public cloud and can be trusted to implement all the necessary security standards. Only after successful attestation is the TCB extended into the public cloud to incorporate the attested instances. One basic requirement of attestation is that the attestation service be located independently of the infrastructure where the instance runs.
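
Reduced to control flow, the attestation step looks like the sketch below. The evidence check is a placeholder: a real verifier validates a hardware-rooted signature chain through an attestation service that, as noted above, runs independently of the hosting infrastructure.

```python
# Schematic of the attestation handshake described above, reduced to
# its control flow. All data shapes are illustrative placeholders.

def verify_evidence(evidence: dict, expected_measurement: str) -> bool:
    # Placeholder: a real verifier checks a hardware-signed quote and
    # compares the reported TEE measurement against a known-good value.
    return evidence.get("measurement") == expected_measurement

def extend_tcb(instance: dict, expected_measurement: str, tcb: set) -> bool:
    evidence = instance["quote"]   # evidence produced inside the TEE
    if verify_evidence(evidence, expected_measurement):
        tcb.add(instance["id"])    # only now is the TCB extended
        return True
    return False

tcb: set = set()
instance = {"id": "vm-42", "quote": {"measurement": "abc123"}}
print(extend_tcb(instance, "abc123", tcb), tcb)
```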


Open Banking's Next Phase: AI, Inclusion and Collaboration

Think of open banking as the backbone for secure, event-driven automation: a bill gets paid, and a savings allocation triggers instantly across multiple platforms. The future lies in secure, permissioned coordination across data silos, and when applied to finance, it unlocks new, high-margin services grounded in trust, automation and personalisation. ... By building modular systems that handle hierarchy, fee setup, reconciliation and compliance – all in one cohesive platform – we can unlock new revenue opportunities. ... Regulators must ensure they are stepping up efforts to sustain progress and support fintech innovation whilst also meeting their aim to keep customers safe. Work must also be done to boost public awareness of the value of open banking. Many consumers are unaware of the financial opportunities open banking offers and some remain wary of sharing their data with unknown third parties. ... Rather than duplicating efforts or competing head-to-head, institutions and fintechs should focus on co-developing shared infrastructure. When core functions like fee management, operational controls and compliance processes are unified in a central platform, fintechs can innovate on customer experience, while banks provide the stability, trust and reach. 


Data centers are eating the economy — and we’re not even using them

Building new data centers is the easy solution, but it’s neither sustainable nor efficient. As I’ve witnessed firsthand in developing compute orchestration platforms, the real problem isn’t capacity. It’s allocation and optimization. There’s already an abundant supply sitting idle across thousands of data centers worldwide. The challenge lies in efficiently connecting this scattered, underutilized capacity with demand. ... The solution isn’t more centralized infrastructure. It’s smarter orchestration of existing resources. Modern software can aggregate idle compute from data centers, enterprise servers, and even consumer devices into unified, on-demand compute pools. ... The technology to orchestrate distributed compute already exists. Some network models already demonstrate how software can abstract away the complexity of managing resources across multiple providers and locations. Docker containers and modern orchestration tools make workload portability seamless. The missing piece is just the industry’s willingness to embrace a fundamentally different approach. Companies need to recognize that most servers are idle 70%-85% of the time. It’s not a hardware problem requiring more infrastructure. 
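
The allocation argument is easy to demonstrate with a toy bin-packing scheduler: place pending jobs on whatever free capacity already exists before provisioning anything new. The node and job figures below are made up.

```python
# Toy allocator for the orchestration idea above: pack jobs onto idle
# capacity first, and only report shortfall when nothing fits.

def schedule(jobs: list[dict], nodes: list[dict]) -> list[tuple]:
    placements = []
    for job in sorted(jobs, key=lambda j: -j["cores"]):   # biggest first
        node = max(nodes, key=lambda n: n["free_cores"], default=None)
        if node and node["free_cores"] >= job["cores"]:
            node["free_cores"] -= job["cores"]
            placements.append((job["name"], node["name"]))
        else:
            placements.append((job["name"], "UNSCHEDULED"))
    return placements

nodes = [{"name": "dc1-a", "free_cores": 48},   # mostly idle node
         {"name": "dc2-b", "free_cores": 12}]
jobs = [{"name": "train", "cores": 32}, {"name": "etl", "cores": 16},
        {"name": "batch", "cores": 24}]
print(schedule(jobs, nodes))
```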


How an AI-Based 'Pen Tester' Became a Top Bug Hunter on HackerOne

While GenAI tools can be extremely effective at finding potential vulnerabilities, XBOW's team found they weren't very good at validating the findings. The trick to making a successful AI-driven pen tester, Dolan-Gavitt explained, was to use something other than an LLM to verify the vulnerabilities. In the case of XBOW, researchers used a deterministic validation approach. "Potentially, maybe in a couple years down the road, we'll be able to actually use large language models out of the box to verify vulnerabilities," he said. "But for today, and for the rest of this talk, I want to propose and argue for a different way, which is essentially non-AI, deterministic code to validate vulnerabilities." But AI still plays an integral role with XBOW's pen tester. Dolan-Gavitt said the technology uses a capture-the-flag (CTF) approach in which "canaries" are placed in the source code and XBOW sends AI agents after them to see if they can access them. For example, he said, if researchers want to find a remote code execution (RCE) flaw or an arbitrary file read vulnerability, they can plant canaries on the server's file system and set the agents loose. ... Dolan-Gavitt cautioned that AI-powered pen testers are not a panacea. XBOW still sees some false positives because some vulnerabilities, like business logic flaws, are difficult to validate automatically.
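
The validation idea fits in a few lines: plant a unique token, then confirm a finding only if that exact token appears in attacker-visible output, with no LLM judgment in the loop. The file path and response strings below are illustrative, not XBOW's implementation.

```python
# Deterministic canary validation in the spirit described above: a
# finding counts only if the planted token surfaces in output the
# "attacker" can see. Paths and responses are illustrative.

import secrets

def plant_canary(path: str) -> str:
    token = f"CANARY-{secrets.token_hex(16)}"
    with open(path, "w") as f:
        f.write(token)
    return token

def validate_file_read(response_body: str, token: str) -> bool:
    # No model judgment involved: exact-match confirmation only.
    return token in response_body

token = plant_canary("/tmp/canary.txt")
print(validate_file_read(f"...{token}...", token))    # True -> real vuln
print(validate_file_read("error: not found", token))  # False -> reject
```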


Data Governance Maturity Models and Assessments: 2025 Guide

Data governance maturity frameworks help organizations assess their data governance capabilities and guide their evolution toward optimal data management. To implement a data governance or data management maturity framework (a “model”) it is important to learn what data governance maturity is, explore how and why it should be assessed, discover various maturity models and their features, and understand the common challenges associated with using maturity models. Data governance maturity refers to the level of sophistication and effectiveness with which an organization manages its data governance processes. It encompasses the extent to which an organization has implemented, institutionalized, and optimized its data governance practices. A mature data governance framework ensures that the organization can support its business objectives with accurate, trusted, and accessible data. Maturity in data governance is typically assessed through various models that measure different aspects of data management such as data quality and compliance and examine processes for managing data’s context (metadata) and its security. Maturity models provide a structured way to evaluate where an organization stands and how it can improve for a given function.


Open-source flow monitoring with SENSOR: Benefits and trade-offs

Most flow monitoring setups rely on embedded flow meters that are locked to a vendor and require powerful, expensive devices. SENSOR shows it’s possible to build a flexible and scalable alternative using only open tools and commodity hardware. It also allows operators to monitor internal traffic more comprehensively, not just what crosses the network border. ... For a large network, that can make troubleshooting and oversight more complex. “Something like this is fine for small networks,” David explains, “but it certainly complicates troubleshooting and oversight on larger networks.” David also sees potential for SENSOR to expand beyond historical analysis by adding real-time alerting. “The paper doesn’t describe whether the flow collectors can trigger alarms for anomalies like rapidly spiking UDP traffic, which could indicate a DDoS attack in progress. Adding real-time triggers like this would be a valuable enhancement that makes SENSOR more operationally useful for network teams.” ... “Finally, the approach is fragile. It relies on precise bridge and firewall configurations to push traffic through the RouterOS stack, which makes it sensitive to updates, misconfigurations, or hardware changes.”
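
The real-time trigger David describes could be as simple as comparing each flow interval's UDP byte count against a moving baseline. A hedged sketch, with illustrative window and threshold values:

```python
# Alert when UDP bytes in the latest flow interval jump far above a
# moving baseline, the kind of DDoS tell mentioned above. Flow-record
# fields and thresholds are illustrative.

from collections import deque

class UdpSpikeDetector:
    def __init__(self, window: int = 12, factor: float = 5.0):
        self.history = deque(maxlen=window)  # recent per-interval bytes
        self.factor = factor
    def observe(self, udp_bytes: int) -> bool:
        baseline = (sum(self.history) / len(self.history)
                    if self.history else None)
        self.history.append(udp_bytes)
        return baseline is not None and udp_bytes > self.factor * baseline

det = UdpSpikeDetector()
for b in [10_000, 12_000, 9_000, 11_000, 480_000]:  # last one: DDoS-like
    if det.observe(b):
        print(f"ALERT: UDP spike, {b} bytes this interval")
```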


Network Segmentation Strategies for Hybrid Environments

It's not a simple feat to implement network segmentation. Network managers must address network architectural issues, obtain tools and methodologies, review and enact security policies, practices and protocols, and -- in many cases -- overcome political obstacles. ... The goal of network segmentation is to place the most mission-critical and sensitive resources and systems under comprehensive security for a finite ecosystem of users. From a business standpoint, it's equally critical to understand the business value of each network asset and to gain support from users and management before segmenting. ... Divide the network segments logically into security segments based on workload, whether on premises, cloud-based or within an extranet. For example, if the Engineering department requires secure access to its product configuration system, only that team would have access to the network segment that contains the Engineering product configuration system. ... A third prong of segmented network security enforcement in hybrid environments is user identity management. Identity and access management (IAM) technology identifies and tracks users at a granular level based on their authorization credentials in on-premises networks but not on the cloud. 


Convergence of AI and cybersecurity has truly transformed the CISO’s role

The most significant impact of AI in security at present is in automation and predictive analysis. Automation, especially when enhanced with AI, such as integrating models like Copilot Security with tools like Microsoft Sentinel, allows organisations to monitor thousands of indicators of compromise in milliseconds and receive instant assessments. ... The convergence of AI and cybersecurity has truly transformed the CISO’s role, especially post-pandemic, when user locations and systems have become unpredictable. Traditionally, CISOs operated primarily as reactive defenders, responding to alerts and attacks as they arose. Now, with AI-driven predictive analysis, we’re moving into a much more proactive space. CISOs are becoming strategic risk managers, able to anticipate threats and respond with advanced tools. ... Achieving real-time threat detection in the cloud through AI requires the integration of several foundational pillars that work in concert to address the complexity and speed of modern digital environments. At the heart of this approach is the adoption of a Zero Trust Architecture: rather than assuming implicit trust based on network perimeters, this model treats every access request, whether to data, applications, or infrastructure, as potentially hostile, enforcing strict verification and comprehensive compliance controls.


Initial Access Brokers Selling Bundles, Privileges and More

"By the time a threat actor logs in using the access and privileged credentials bought from a broker, a lot of the heavy lifting has already been done for them. Therefore, it's not about if you're exposed, but whether you can respond before the intrusion escalates." More than one attacker may use any given initial access, either because the broker sells it to multiple customers, or because a customer uses the access for one purpose - say, to steal data - then sells it on to someone else, who perhaps monetizes their purchase by further ransacking data and unleashing ransomware. "Organizations that unwittingly have their network access posted for sale on initial access broker forums have already been victimized once, and they are on their way to being victimized once again when the buyer attacks," the report says. ... "Access brokers often create new local or domain accounts, sometimes with elevated privileges, to maintain persistence or allow easier access for buyers," says a recent report from cybersecurity firm Kela. For detecting such activity, "unexpected new user accounts are a major red flag." So too is "unusual login activity" to legitimate accounts that traces to never-before-seen IP addresses, or repeat attempts that only belatedly succeed, Kela said. "Watch for legitimate accounts doing unusual actions or accessing resources they normally don't - these can be signs of account takeover."

Daily Tech Digest - July 23, 2024

Transforming GRC Landscape with Generative AI

Streamlining GRC workflows and integrating various components of the technology stack can significantly enhance efficiency. Apache Airflow is an open-source workflow automation tool that orchestrates complex data pipelines and automates GRC processes, leading to substantial efficiency gains. Apache Camel facilitates integration between different system components, ensuring smooth data flow across the technology stack. Additionally, robotic process automation (RPA) can be implemented using open-source platforms like Robot Framework. These platforms automate repetitive tasks within GRC processes, further enhancing operational efficiency and allowing human resources to focus on more strategic activities. By leveraging these open-source tools and techniques, organizations can build a robust infrastructure to support GenAI and RAG in their GRC processes, achieving enhanced efficiency, accuracy, and strategic insights. ... Traditional approaches are labour-intensive and prone to human error, leading to inefficiencies and increased compliance risks. By contrast, GenAI and RAG can streamline processes, reduce the burden on human resources, and provide timely and accurate information for strategic planning. 
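
As a concrete illustration of the orchestration role described above, a minimal Airflow DAG might wire a daily GRC pipeline together as follows; the Airflow API is real (2.x style), while the GRC task bodies are stubs.

```python
# Minimal Apache Airflow DAG of the kind the article has in mind: a
# scheduled GRC pipeline that pulls control evidence, checks it, and
# files a report. Task bodies are illustrative stubs.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def collect_evidence(): print("pull control evidence from systems")
def check_compliance(): print("evaluate evidence against policy")
def publish_report():   print("file report for auditors")

with DAG(dag_id="grc_daily_compliance", start_date=datetime(2024, 1, 1),
         schedule="@daily", catchup=False) as dag:
    collect = PythonOperator(task_id="collect",
                             python_callable=collect_evidence)
    check = PythonOperator(task_id="check",
                           python_callable=check_compliance)
    report = PythonOperator(task_id="report",
                            python_callable=publish_report)
    collect >> check >> report   # linear evidence -> check -> report
```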


Two AI Transparency Concerns that Governments Should Align On

AI raises two fundamental transparency concerns that have gained in salience with the spread of generative AI. First, the interaction with AI systems increasingly resembles human interaction. AI is gradually developing the capability of mimicking human output, as evidenced by the flurry of AI-generated content that bears similarities to human-generated content. The “resemblance concern” is thus that humans are left guessing: Is an AI system in use? Second, AI systems are inherently opaque. Humans who interact with AI systems are often in the dark about the factors and processes underlying AI outcomes. The “opacity concern” is thus that humans are left wondering: How does the AI system work? ... Regulatory divergence presents a unique opportunity for governments to learn from each other. Governments can draw from the expertise accumulated by national regulators and other governments that are experimenting to find effective AI rules. For example, governments looking to establish information rights can learn from Brazil’s precise elaboration of information to be disclosed, South Korea’s detailed procedure for requesting information, and the EU’s unique exception mechanisms.


5 IT risks CIOs should be paranoid about

CIOs sitting on mounting technical debt must turn paranoia into action plans that communicate today’s problems and tomorrow’s risks. One approach is to define non-negotiables and seek agreement on them with the board and executive committee, outlining criteria for when upgrading legacy systems must be prioritized above other business objectives. ... CIOs should be drivers of change — which can create stress — while taking proactive and ongoing steps to reduce stress in their organization and across the company. The risks of burnout mount because of higher business expectations of delivering new technology capabilities, leading change management activities, and ensuring systems are operational. CIOs should promote ways to disconnect and reduce stress, such as improving communications, simplifying operations, and setting realistic objectives. ... “When considering the growing number of global third parties organizations need to collaborate with, protecting the perimeter with traditional security methods becomes ineffective the moment the data leaves the enterprise,” says Vishal Gupta, CEO & co-founder of Seclore.


Understanding the difference between competing AI architectures

A common misconception is that AI infrastructure can just be built to the NVIDIA DGX reference architecture. But that is the easy bit and is the minimum viable baseline. How far organizations go beyond that is the differentiator. AI cloud providers are building highly differentiated solutions through the application of management and storage networks that can dramatically accelerate the productivity of AI computing. ... Another important difference to note with regard to AI architecture versus traditional storage models is the absence of a requirement to cache data. Everything is done by direct request. The GPUs talk directly to the disks across the network; they don't go through the CPUs or the TCP/IP stack. The GPUs are directly connected to the network fabric. They bypass most of the network layers and go directly to the storage. It removes network lag. ... Ultimately, organisations should partner with a provider they can rely on. A partner that can offer guidance, provide engineering and support. Businesses using cloud infrastructure are doing so to concentrate on their own core differentiators.


How Much Data Is Too Much for Organizations to Derive Value?

“If data is in multiple places, that is increasing your cost,” points out Chris Pierson, founder and CEO of cybersecurity company BlackCloak. Enterprises must also consider the cost of maintenance, which could include engineering and program analyst time. Beyond storage and maintenance costs, data also comes with the potential cost of risk. Threat actors constantly look for ways to access and leverage the data safeguarded by enterprises. If they are successful, and many are, enterprises face a cascade of potential costs. ... Once an enterprise is able to wrap its arms around data governance, leaders can start to ask questions about what kind of data can be deleted and when. The simple answer to the question of how much is too much boils down to value versus risk. “Start with the fundamental question: What does the company get from the data? Does it cost more to store and protect that data than the data actually provides to the organization?” says Wall. When it comes to retention, consider why data is being collected and how long it is needed. “If you don't need the data, don't collect it. That should always be the first fundamental rule,” says Pierson.


Empowering Developers in Code Security

When your team is ready to add security earlier in the development process, we suggest introducing 'guardrails' into their workflow. Guardrails, unlike wholly new processes, can slide into place unobtrusively, providing warnings about potential security issues only when they are actionable and true positives. Ideally, you want to minimize friction and enable developers to deliver safer, better code that will pass tests down the line. One tool that is almost universal across development and DevOps teams is Git. With over 97% of developers using Git daily, it is a familiar platform that can be leveraged to enhance security. Built directly into Git is an automation platform called Git Hooks, which can trigger just-in-time scanning at specific stages of the Git workflow, such as right before a commit is made. By catching issues before making a commit and providing direct feedback on how to fix them, developers can address security concerns with minimal disruption. This approach is much less expensive and time-consuming than addressing issues later in the development process. This can actually increase the time spent on new code by reducing the amount of maintenance that eventually needs to be done.
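
A minimal pre-commit hook shows the guardrail shape: scan only the newly added lines of the staged diff and block the commit with actionable feedback. Save it as .git/hooks/pre-commit and mark it executable; the two patterns are a deliberately tiny sample, not a real secret scanner.

```python
#!/usr/bin/env python3
# Pre-commit Git hook in the spirit described above: scan staged
# additions for secret-like strings and abort with direct feedback.
# Patterns are a small illustrative sample.

import re
import subprocess
import sys

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),
}

staged = subprocess.run(["git", "diff", "--cached", "--unified=0"],
                        capture_output=True, text=True).stdout

findings = []
for line in staged.splitlines():
    if not line.startswith("+"):          # only newly added lines
        continue
    for name, pattern in PATTERNS.items():
        if pattern.search(line):
            findings.append((name, line[:60]))

if findings:
    for name, snippet in findings:
        print(f"BLOCKED: possible {name}: {snippet}...")
    print("Remove the secret (or use a vault reference) and re-commit.")
    sys.exit(1)                           # non-zero exit aborts the commit
```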


Retrieval-augmented generation refined and reinforced

RAG strengthens the application of generative AI across business segments and use cases throughout the enterprise, for example, code generation, customer service, product documentation, engineering support, and internal knowledge management. ... The journey to industrializing RAG solutions presents several significant challenges along the RAG pipeline. These need to be tackled for them to be effectively deployed in real-world scenarios. Basically, a RAG pipeline consists of four standard stages — pre-retrieval, retrieval, augmentation and generation, and evaluation. Each of these stages presents certain challenges that require specific design decisions, components, and configurations. At the outset, determining the optimal chunking size and strategy proves to be a nontrivial task, particularly when faced with the cold-start problem, where no initial evaluation data set is available to guide these decisions. A foundational requirement for RAG to function effectively is the quality of document embeddings. Guaranteeing the robustness of these embeddings from inception is critical, yet it poses a substantial obstacle, just like the detection and mitigation of noise and inconsistencies within the source documents.
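
Because chunking is one of the first design decisions named above, even a crude side-by-side comparison of chunk sizes is useful before any evaluation set exists. The sizes below are arbitrary starting points, not recommendations.

```python
# Slice the same document at two chunk sizes with overlap, so retrieval
# quality per setting can be eyeballed (or later scored). Illustrative
# only; production chunkers usually split on semantic boundaries.

def chunk(text: str, size: int, overlap: int) -> list[str]:
    step = size - overlap
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]

doc = ("RAG pipelines have four standard stages: pre-retrieval, "
       "retrieval, augmentation and generation, and evaluation. "
       "Chunk size interacts with embedding quality and noise.")

for size in (40, 80):
    pieces = chunk(doc, size=size, overlap=10)
    print(f"size={size}: {len(pieces)} chunks, first={pieces[0]!r}")
```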


Confidential AI: Enabling secure processing of sensitive data

Confidential AI is the application of confidential computing technology to AI use cases. It is designed to help protect the security and privacy of the AI model and associated data. Confidential AI utilizes confidential computing principles and technologies to help protect data used to train LLMs, the output generated by these models and the proprietary models themselves while in use. Through vigorous isolation, encryption and attestation, confidential AI prevents malicious actors from accessing and exposing data, both inside and outside the chain of execution. ... Confidential AI can also enable new or better services across a range of use cases, even those that require activation of sensitive or regulated data that may give developers pause because of the risk of a breach or compliance violation. This could be personally identifiable user information (PII), business proprietary data, confidential third-party data or a multi-company collaborative analysis. This enables organizations to more confidently put sensitive data to work, as well as strengthen protection of their AI models from tampering or theft.


Women in IT Security Lack Opportunities, Not Talent

Female leaders are also instrumental in advocating for policies and practices that promote diversity and inclusion, such as equitable hiring practices, sponsorship programs, and family-friendly policies. "By actively working to create a more inclusive environment, female cyber leaders can help pave the way for future generations of women in cybersecurity," Dohm said. ... Guenther noted that women often encounter unconscious biases that affect decisions regarding leadership potential and technical capabilities, particularly as it relates to perception bias. "Women in cybersecurity, as in many other fields, often face double standards in how their actions and words are perceived compared to their male counterparts," she said. For example, assertiveness, decisiveness, and direct communication – qualities praised in male leaders – can be unfairly labeled as aggressive or overly emotional when exhibited by women. This disparity in perception can hinder women from being seen as potential leaders or being evaluated fairly. "Addressing these biases is crucial for creating a truly equitable workplace where everyone is judged by the same standards and behaviors are interpreted consistently, regardless of gender," Guenther said.


Early IT takeaways from the CrowdStrike outage

Recovering from CrowdStrike has been an all-hands-on-deck event. In some instances, companies have needed humans to be able to touch and reboot impacted machines in order to recover — an arduous process, especially at scale. If you have outsourced IT operations to managed service providers, consider that those MSPs may not have enough staff on hand to mitigate your issues along with those of their other clients, especially when a singular event has widespread fallout. ... Ensure you review recovery steps and processes on a regular basis to guarantee that your team knows exactly where those recovery keys are and what processes are necessary to obtain them. While BitLocker is often mandated for compliance reasons, it also adds a layer of complications you may not be prepared for. ... The underlying culprit was also quickly identified: a faulty CrowdStrike update. In other incident situations, you may not be so quickly informed. It may not be clear what has happened and what assets have been impacted. Often, you’ll need to reach out to staff who are closely working with impacted assets to determine what is going on and what actions to take.



Quote for the day:

"Effective questioning brings insight, which fuels curiosity, which cultivates wisdom." -- Chip Bell

Daily Tech Digest - December 14, 2023

Moral Machines: The Importance of Ethics in Generative AI

A transparent model can provide better functionality than an opaque model, as it provides users with explanations for its outputs. An opaque model does not need to explain its reasoning process, which introduces risk and potential liability if unexpected or inaccurate results are provided by a generative AI tool. This lack of visibility also makes opaque models more difficult to test than their transparent counterparts. As such, it’s important to consider generative AI tools with high transparency when working to build ethical systems. Explainability of AI models is another important aspect of creating ethical systems yet challenging to control. AI models, specifically deep learning models, use thousands upon thousands of parameters when creating an output. This type of process can be nearly impossible to track from beginning to end, which limits user visibility. Lack of explainability has already been demonstrated in real-world problems; we’ve seen many examples of AI hallucinations, such as the Bard chatbot error in February 2023, which occurs when a model provides an output that is entirely false or implausible.


12 Software Architecture Pitfalls and How to Avoid Them

Reusing an existing architecture is seldom successful unless the QARs for the new architecture match the ones that an existing architecture was designed to meet. Past performance is no guarantee of future success! Reusing part of an existing application to implement an MVP rapidly may constrain its associated MVA by including legacy technologies in its design. Extending existing components in order to reuse them may complicate their design and make their maintenance more difficult and therefore more expensive. ... Architectural work is problem-solving, with the additional skill of being able to make trade-offs informed by experience in solving particular kinds of problems. Developers who have not had experience solving architectural problems will learn, but they will make a lot of mistakes before they do. ... While new technologies offer interesting capabilities, they always come with trade-offs and unintended side effects. The new technologies don’t fundamentally or magically make meeting QARs unimportant or trivial; in many cases the ability of new technologies to meet QARs is completely unknown.


CIOs weigh the new economics and risks of cloud lock-in

“It is true that hyperscale cloud providers have hit such a critical mass that they create their own gravitational pull,” he says. “Once you adopt their cloud platforms, it can be difficult and expensive to migrate out. [But] CIOs today have more choice in cloud providers than ever. It is no longer a decision between AWS and Azure. Google has been successfully executing a strategy to attract more enterprise customers. Even Oracle has made the transition from focusing on in-house technology to become a full-service cloud provider.” CIOs may consider other approaches, McCarthy adds, such as selecting a single-tenant cloud solution offered by HPE or Dell, which bundle hardware and software in an as-a-service business model that gives CIOs more cloud options. “Another alternative includes colocation companies like Equinix, which has been offering bare-metal IaaS for several years and has now created a partnership with VMware to extend those services higher up the stack,” he says, adding that CIOs should not view a cloud provider “as a location but rather as an operating model that can be deployed in service provider data centers, on-premise, or at the edge.”


Understanding the True Cost of a Data Breach in 2023

Data breaches are common in the modern world, which means even if your organization hasn’t suffered one, the chances of it happening aren’t negligible. Criminal groups stand to profit significantly from these actions, so they are innovative and invest time and money to conduct highly advanced attacks. This means that a data breach doesn’t simply appear one second and then disappear the next. An IBM report noted the average breach cycle lasts for 287 days, with businesses taking 212 days to detect it and an additional 75 to neutralize the threat. Every organization should implement preventative measures to combat threat actors. This means building and exercising safe practices, like storing information securely, adhering to clear policies and training staff to understand data protection. Ultimately, the longer a breach continues, the more expensive it becomes. The Cost of a Data Breach Report 2023 found that companies that contain a breach within 30 days save over $1 million in contrast to those that take longer, so it pays to have a strong recovery process in place.


Fortifying confidential computing in Microsoft Azure

Adding GPU support to confidential VMs is a big change, as it expands the available compute capabilities. Microsoft’s implementation is based on Nvidia H100 GPUs, which are commonly used to train, tune, and run various AI models including computer vision and language processing. The confidential VMs allow you to use private information as a training set, for example training a product evaluation model on prototype components before a public unveiling, or working with medical data, training a diagnostic tool on X-ray or other medical imagery. Instead of embedding a GPU in a VM, and then encrypting the whole VM, Azure keeps the encrypted GPU separate from your confidential computing instance, using encrypted messaging to link the two. Both operate in their own trusted execution environments (TEE), ensuring that your data remains secure. Conceptually this is no different from using an external GPU over Thunderbolt or another PCI bus. Microsoft can allocate GPU resources as needed, with the GPU TEE ensuring that its dedicated memory and configuration are secured.


From reactive to proactive: Always-ready CFD data center analysis

By synchronizing with these toolsets, digital twin models can pull all relevant, necessary data and update accordingly. The data includes objects on the floor plan, assets in the racks, power chain connections, historical power, and environmental readings, and perforated tile and return grate locations. Therefore, the digital twin model is always ready to run the next predictive scenario with current data and minimal supervision from the operational team. As part of the routine output from the software, DataCenter Digital Twin produces Excel-ready reports, capacity dashboards, CFD reports, and go/no-go planning analysis. Teams can then use this information to evaluate future capacity plans, conduct sensitivity studies (such as redundant failure or transient power failure), and run energy optimization studies as needed. Much of this functionality is available through an intuitive and accessible web portal. We know that every organization has a unique set of problems, priorities, and workflows. As such, we’ve split DataCenter Insight Platform into two offerings – DataCenter Asset Twin and DataCenter Digital Twin.


AI-Powered Encryption: A New Era in Cybersecurity

AI-powered encryption represents a groundbreaking advancement in cybersecurity, leveraging the capabilities of artificial intelligence to strengthen data protection. At its core, AI-powered encryption utilizes machine learning algorithms to continuously analyze and adapt to new cyber threats, making it an incredibly dynamic and proactive defense mechanism. By employing AI-driven pattern recognition and predictive analytics, this encryption method can rapidly identify potential vulnerabilities and create tailored encryption protocols to thwart would-be attackers. One key aspect of AI-powered encryption is its ability to autonomously adjust security parameters in real-time based on evolving risk factors. This adaptability ensures that data remains secure even as cyber threats become more sophisticated. Moreover, the integration of AI enables encryption systems to swiftly detect anomalies or suspicious activities within the network, providing an extra layer of defense against unauthorized access or data breaches. 


7 Best Practices for Developers Getting Started with GenAI

Experiment (and encourage your team to experiment) with GenAI tools and code-gen solutions, such as GitHub Copilot, which integrates with every popular IDE and acts as a pair programmer. Copilot offers programmers suggestions, helps troubleshoot code and generates entire functions, making it faster and easier to learn and adapt to GenAI. A word of warning when you first use these off-the-shelf tools: Be wary of using proprietary or sensitive company data, even when just feeding the tool a prompt. GenAI vendors may store and use your data for use in future training runs, a major no-no for your company’s data policy and info-security protocol. ... One of the first steps to deploying GenAI well is to master writing prompts, which is both an art and a science. While prompt engineer is an actual job description, it’s also a good moniker for anyone looking to improve their use of AI. A good prompt engineer knows how to develop, refine and optimize text prompts to get the best results and improve the overall AI system performance. Prompt engineering doesn’t require a particular degree or background, but those doing it need to be skilled at explaining things well.


Could Your Organization Benefit from Hyperautomation?

Building a sophisticated hyperautomation ecosystem requires a significant technology investment, Manders says. “Additionally, the integration of multiple technologies and tools, inherent in hyperautomation, can usher in increased complexity, making ecosystem maintenance a challenging endeavor.” Failing to establish clear goal and governance guidelines can also create serious challenges. Automation without governance could lead individual departments to create their own automation processes, which may conflict with other departments’ processes. The resulting hyperautomation silos could lead to some departments failing to take advantage of solutions fellow departments have already deployed. Additionally, every time an organization transports data to another process or platform, there’s the risk of data leaks. “If we don’t follow best practices and ensure that data is secure, this information could fall into the wrong hands,” Rahn warns. Hyperautomation may also lead adopters to dependency on a particular vendor’s ecosystem of tools and technologies. 


How insurtech is using artificial intelligence

As insurers look to become more customer centric, the coupling of AI with advanced analytics can help provide a more specific, personalised and real-time picture of insurance customers. With insurance customers coming to rely on online platforms for purchasing and managing their policies for such a particular commodity, interactions with the firms themselves are few and far between, which can water down the user experience. However, experience orchestration — the leveraging of customer data and AI by insurance companies to create highly personalised interactions — can be implemented to improve relations long-term. Manan Sagar, global head of insurance at Genesys, explains ... This approach not only improves the customer experience but also enhances employee efficiency by automating tasks or routing calls more effectively. “As the insurance industry navigates the digital age, experience orchestration can serve as a powerful tool to uphold the tradition of trust and personal relationships that have long defined the industry. Through this, firms can differentiate themselves in an increasingly commoditised market and ensure their customers remain loyal and satisfied.”



Quote for the day:

"A true leader always keeps an element of surprise up his sleeve which others cannot grasp but which keeps his public excited and breathless." -- Charles de Gaulle

Daily Tech Digest - December 04, 2023

Proactive, not reactive: the path to ensuring operational resilience in cybersecurity

Operational resilience goes beyond ensuring business continuity by mitigating disruptions as and when they occur. Resilience needs a proactive approach to maintaining stable and reliable digital systems, regardless of the severity of threat incidents. This "bankability" (excuse the pun) of the financial system is critical to preserving public trust and confidence in the global financial system. Given the interconnectedness of financial firms with external third parties, any plan for operational resilience needs to address multiple lines of communication, automated systems of interactions and information sharing, and a growing attack surface. ... The dependence of the financial sector on the telecom and energy industries, and the increasingly global nature of the sector means that operational resilience exercises need to not just be cross-border, but cross-sector too. Today, national or even global-level threats are a reality, emphasizing the need to include government partners in the exercises. After all, protecting critical private infrastructure safeguards a nation's financial stability.


Black-Box, Gray-Box, and White-Box Penetration Testing: Importance and Uses

Gray-box penetration testing can simulate advanced persistent threat (APT) scenarios in which the attacker is highly sophisticated and operates on a longer time scale (CISA, 2023). In these types of attacks, the threat actor has collected a good deal of information about the target system—similar to a gray-box testing scenario. Gray-box penetration testing allows many organizations to strike the right balance between white-box and black-box testing. ... The main disadvantage of gray-box testing is that it can be too “middle-of-the-road” when compared with black-box or white-box testing. If organizations do not strike the right balance during gray-box testing, they may miss crucial insights that would have been found with a different technique. ... Black-box, gray-box, and white-box testing are all valuable forms of penetration testing, each with its own pros, cons, and use cases. Penetration testers need to be familiar with the importance and use cases of each type of test to execute them most efficiently, using the right tools for each one.


The arrival of genAI could cover critical skills gaps, reshape IT job market

While genAI offers the promise of clear business benefits, education is key and collaboration with cybersecurity and risk experts is needed to help establish an environment where the technology can be used safely, securely, and productively, according to Emm. Hurdles to adopting AI persist. Those issues include high costs, uncertain return on investment (ROI), the need to upskill entire staffs, and potential exposure of sensitive corporate data to unfamiliar automation technology. Few organizations, however, have put appropriate safeguards in place to guard against some of genAI's most well-known flaws, such as hallucinations, exposure of corporate data, and data errors. Most are leaving themselves wide open to the acknowledged risks of using genAI, according to Kaspersky. For example, only 22% of C-level executives have discussed putting rules in place to regulate the use of genAI in their organizations — even as they eye it as a way of closing the skills gap. Cisco CIO Fletcher Previn, whose team is working to embed AI in back-end systems and products, said it's critical to have the policies, security, and legal guardrails in place to be able to "safely adopt and embrace AI capabilities other vendors are rolling out into other people’s tools."


State of Serverless Computing and Event Streaming in 2024

Traditional stream processing usually involves an architecture with many moving parts, managing distributed infrastructure, and using a complex stream processing engine. For instance, Apache Spark, one of the most popular processing engines, is notoriously difficult to deploy, manage, tune, and debug (read more about the good, bad and ugly of using Spark). Implementing a reliable, scalable stream processing capability can take anywhere from a few days to a few weeks, depending on the use case. On top of that, you also need to deal with continuous monitoring, maintenance, and optimization; you may even need a dedicated team to handle this overhead. All in all, traditional stream processing is challenging, expensive, and time-consuming. In contrast, serverless stream processing eliminates the headache of managing a complex architecture and the underlying infrastructure. It's also more cost-effective, since you pay only for the resources you use. It's natural, then, that serverless stream processing solutions have started to appear.
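To make the comparison concrete, here is a minimal Python sketch of the core work any stream processor performs, a tumbling-window aggregation. The event data is hypothetical; real engines (and the serverless services that wrap them) layer distribution, fault tolerance, and state management on top of this basic idea.

```python
# A minimal sketch of the core of stream processing: counting events per key
# in fixed (tumbling) time windows. Real engines add distribution, fault
# tolerance, and managed state; serverless offerings hide even this plumbing.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, key) events into fixed windows and count per key."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        counts[window_start][key] += 1
    return counts

# Hypothetical click events: (unix_timestamp, page)
events = [(0, "home"), (15, "home"), (30, "pricing"), (75, "home")]
for window, per_key in sorted(tumbling_window_counts(events).items()):
    print(f"window starting at t={window}: {dict(per_key)}")
```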


The Glaring Gap in Your Cybersecurity Posture: Domain Security

Because domain names are used for marketing and brand initiatives, security teams may feel that protecting them falls under the marketing or legal side of the business, or they may have left domain protection in the hands of their IT department. But if organizations don't even know who their domain registrars are, chances are they are unaware of the registrars' policies and the security measures they have in place for branded, trademarked domains. Domain security should be an essential branch of cybersecurity, protecting brands online, but it is not always the highest priority for consumer-grade domain registrars. Unfortunately, adversaries are well aware of the growth in businesses' online presence and the often minimal attention given to domain security, leading them to take a special interest in targeting corporate and government domain names that are left exposed. Organizations will continue to find themselves in the path of a perfect storm of domain and DNS attacks, and potential financial or reputational devastation, if they continue to allow blind spots to build up in their security posture.
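As a starting point for closing that blind spot, a team can at least enumerate the DNS security signals its domains publish. The sketch below assumes the third-party dnspython package is installed; registrar identity and lock status would still require a separate WHOIS/RDAP check.

```python
# A rough audit of basic DNS security signals for a domain, assuming
# dnspython is installed (pip install dnspython). It checks nameservers (NS),
# certificate authority authorization (CAA), and DNSSEC signing (DS).
import dns.resolver

def audit_domain(domain):
    for rtype in ("NS", "CAA", "DS"):
        try:
            answers = dns.resolver.resolve(domain, rtype)
            print(f"{rtype}: {[a.to_text() for a in answers]}")
        except dns.resolver.NoAnswer:
            print(f"{rtype}: no record published")  # e.g., missing CAA or DNSSEC
        except dns.resolver.NXDOMAIN:
            print(f"{domain}: domain does not resolve")
            break

audit_domain("example.com")
```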


Put guardrails around AI use to protect your org, but be open to changes

While a seasoned CISO might recognize that the output from ChatGPT in response to a simple security question is malicious, it's less likely that another member of staff will have the same antenna for risk. Without regulations in place, any employee could inadvertently be stealing another company's or person's intellectual property (IP), or delivering their own company's IP into an adversary's hands. Given that LLMs store user input as training data, this could contravene data privacy regulations, including GDPR. Developers are using LLMs to help them write code; when this code is ingested, it can reappear in response to a prompt from another user. There is nothing the original developer can do to control this, and because the LLM was used to help create the code, it is highly unlikely that they can prove ownership of it. This might be mitigated by using a genAI license that helps enterprises guard against their code being used as an input for training. However, even in these circumstances, imposing a "trust but verify" approach is a good idea.
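One practical guardrail along these lines is scrubbing obvious secrets from text before it ever leaves the organization. The sketch below is illustrative only; its patterns are simplistic stand-ins for the far richer detection a real data loss prevention (DLP) tool performs.

```python
# A minimal pre-submission guardrail: redact obvious secrets and identifiers
# from text before it is sent to an external LLM. Patterns are illustrative;
# production filters rely on much richer detection.
import re

PATTERNS = {
    "api_key": re.compile(r"\b[A-Za-z0-9]{32,}\b"),   # long opaque tokens
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Contact dev@example.com, key: 9f8a7b6c5d4e3f2a1b0c9d8e7f6a5b4c"))
# -> Contact [REDACTED-email], key: [REDACTED-api_key]
```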


Why Generative AI Threatens Hospital Cybersecurity — and How Digital Identity Can Be One of Its Greatest Defenses

Writing convincing deceptive messages isn't the only task cyber attackers use ChatGPT for. The tool can also be prompted to build mutating malicious code and ransomware by individuals who know how to circumvent its content filters. It's difficult to detect and surprisingly easy to pull off. Ransomware is particularly dangerous to healthcare organizations, as these attacks typically force IT staff to shut down entire computer systems to stop the spread. When this happens, doctors and other healthcare professionals must go without crucial tools and shift back to paper records, resulting in delayed or insufficient care that can be life-threatening. Since the start of 2023, 15 healthcare systems operating 29 hospitals have been targeted by ransomware incidents, with data stolen from 12 of the 15 organizations affected. This is a serious threat that requires serious cybersecurity solutions. And generative AI isn't going anywhere; it's only picking up speed. It is imperative that hospitals lay thorough groundwork to prevent these tools from giving bad actors a leg up.


15 Essential Data Mining Techniques

The essence of data mining lies in the fundamental technique of tracking patterns, a process integral to discerning and monitoring trends within data. This method enables the extraction of intelligent insights into potential business outcomes. For instance, upon identifying a sales trend, organizations gain a foundation for taking strategic actions to leverage this newfound insight. When it's revealed that a specific product outperforms others within a particular demographic, this knowledge becomes a valuable asset: organizations can capitalize on it by developing similar products or services tailored to that demographic, or by optimizing the stocking strategy for the original product. In the realm of data mining, classification techniques play a pivotal role by scrutinizing the diverse attributes linked to various types of data. By discerning the key characteristics inherent in these data types, organizations gain the ability to systematically categorize or classify related data, a process that proves crucial in identifying sensitive information.
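As a toy illustration of the classification technique described above, the sketch below uses scikit-learn (assumed installed) to label records as sensitive or public from a few made-up numeric attributes.

```python
# A toy classification example: learn to flag "sensitive" records from
# simple attributes. Feature values and labels are purely illustrative.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical attributes: [contains_account_number, field_length, digit_ratio]
X = [[1, 16, 0.90], [0, 8, 0.10], [1, 12, 0.80], [0, 40, 0.05]]
y = ["sensitive", "public", "sensitive", "public"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[1, 14, 0.85]]))  # -> ['sensitive']
```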


SolarWinds lawsuit by SEC puts CISOs in the hot seat

Without ongoing, open dialogue between these leaders, it's impossible to guarantee complete awareness of the range of complications associated with potential cyber risks. Now that we've seen how these risks can easily extend beyond security concerns into catastrophic financial and legal issues, it's important that conversations about them are not taking place exclusively among CISOs. The roles and responsibilities of CISOs and other C-suite executives vary dramatically, which can naturally result in siloed processes and priorities. However, to ensure alignment and effectively protect an organization from data breaches and legal recourse alike, it's imperative that business leaders learn to "speak the same language" and share information to align their efforts and goals. CFOs and CISOs must collaborate to evaluate the relationships between cybersecurity incidents and legal risks. We can facilitate this by leveraging cyber risk quantification and management tools, which aggregate data to calculate, quantify, and translate information about threats and vulnerabilities into lay terms and easily digestible data.
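For a flavor of what "quantify and translate" means in practice, one of the simplest and most widely taught risk formulas is annualized loss expectancy; the figures below are purely illustrative.

```python
# Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) x Annual
# Rate of Occurrence (ARO). Real cyber risk quantification tools aggregate
# far richer threat data, but the translation into dollars looks like this.
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate):
    sle = asset_value * exposure_factor  # expected loss from a single incident
    return sle * annual_rate

# Hypothetical scenario: $2M asset, 30% loss per breach, one breach every 2 years
print(f"ALE: ${annualized_loss_expectancy(2_000_000, 0.30, 0.5):,.0f}")  # ALE: $300,000
```

A number like this gives a CFO and a CISO a shared unit, dollars per year, in which to compare a security control's cost against the risk it reduces.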


CTO interview: Greg Lavender, Intel

“Our confidential computing capability is also a privacy-ensuring capability,” says Lavender. “Europe is ahead in this area, with the notion of sovereign clouds. Intel partners with some of the European governments on sovereign cloud using Intel’s platforms for confidential computing. The privacy-preserving capabilities are built into these platforms, which beyond government, will also be useful in regulated industries like financial services, healthcare and telcos.” “We also see a convergence in AI that will open up a big market for our privacy-ensuring software and hardware,” says Lavender. “You spend a lot of time prepping your data, tagging your data, getting your data ready for training, usage or inference usage. You want to do that securely in a multi-tenant environment. Our platforms give you the opportunity to do your training securely between the CPU and the GPU, and then you can deploy it securely in the cloud or at the edge.” “I’m talking with a lot of CIOs about this technology, because data is now such a valuable thing. It’s what you use to train your models. You don’t want somebody else to get access to that data because then they can use it to train their models and offer competing services.”



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - January 15, 2023

How confidential computing will shape the next phase of cybersecurity

At its core, confidential computing encrypts data at the hardware level. It’s a way of “protecting data and applications by running them in a secure, trusted environment,” explains Noam Dror—SVP of solution engineering at HUB Security, a Tel Aviv, Israel-based cybersecurity company that specializes in confidential computing. In other words, confidential computing is like running your data and code in an isolated, secure black box, known as an “enclave” or trusted execution environment (TEE), that’s inaccessible to unauthorized systems. The enclave also encrypts all the data inside, allowing you to process your data even when hackers breach your infrastructure. Encryption makes the information invisible to human users, cloud providers, and other computer resources. Encryption is the best way to secure data in the cloud, says Kurt Rohloff, cofounder and CTO at Duality, a cybersecurity firm based in New Jersey. Confidential computing, he says, allows multiple sources to analyze and upload data to shared environments, such as a commercial third-party cloud environment, without worrying about data leakage.
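The "black box" idea can be made concrete with a toy analogy, sketched below. It is emphatically not a real TEE: hardware enclaves enforce isolation in silicon and add remote attestation, while this sketch merely uses the third-party cryptography package to show that data stays encrypted everywhere except inside the boundary that holds the key.

```python
# A conceptual toy only, NOT a real enclave: hardware TEEs enforce this
# boundary in silicon. Here the Fernet key plays the role of the enclave's
# sealed secret; the host only ever handles ciphertext.
from cryptography.fernet import Fernet

class ToyEnclave:
    def __init__(self):
        self._key = Fernet(Fernet.generate_key())  # key never leaves the "enclave"

    def seal(self, plaintext: bytes) -> bytes:
        return self._key.encrypt(plaintext)        # safe to store or ship anywhere

    def process_sealed(self, ciphertext: bytes) -> int:
        data = self._key.decrypt(ciphertext)       # plaintext exists only in here
        return len(data)                           # some computation on the data

enclave = ToyEnclave()
sealed = enclave.seal(b"patient-record-123")
print(enclave.process_sealed(sealed))  # the host sees only `sealed`, never the data
```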


Not All Multi-Factor Authentication Is Created Equal

Many legacy MFA platforms rely on easily phishable factors like passwords, push notifications, one-time codes, or magic links delivered via email or SMS. In addition to the complicated and often frustrating user experience they create, phishable factors such as these open organizations up to cyber threats. Through social engineering attacks, employees can easily be manipulated into handing these authentication factors to a cyber criminal. And by relying on these factors, the burden of protecting digital identities lies squarely on the end user, meaning an organization's cybersecurity strategy can hinge entirely on a moment of human error. Beyond social engineering, man-in-the-middle attacks and readily available toolkits make bypassing existing MFA a trivial exercise. Wherever there are passwords and other weak, phishable factors, there is an attack vector for hackers, leaving organizations to suffer the consequences of account takeovers, ransomware attacks, data leakage, and more. A phishing-resistant MFA solution removes these factors entirely, making it impossible for an end user to be tricked into handing them over, even by accident, or for them to be collected by automated phishing tactics.
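To see why one-time codes are so phishable, it helps to look at how little a TOTP code actually is: a short number derived from a shared secret and the clock, with nothing binding it to the legitimate site. The standard-library sketch below follows RFC 6238; the secret is a demo value. Anyone who tricks a user into reading the code out can replay it within the time window, which is exactly what FIDO's origin-bound signatures prevent.

```python
# RFC 6238 TOTP in the standard library. The code is just HMAC(secret, time),
# truncated to six digits: any holder can use it within the ~30s window,
# which is what makes OTPs phishable via social engineering.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)  # time-based counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; output changes every 30 seconds
```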


Europe’s cyber security strategy must be clear about open source

While the UK government has tried to recognise the importance of digital supply chain security, current policy doesn't consider open source as part of that supply chain. Instead, regulation and proposed policies focus only on third-party software vendors in the traditional sense, failing to recognise the building blocks of all software today and the supply chain behind it. To hammer the point home, the UK's 11,000+ word National Cyber Security Strategy does not include a single reference to open source. GCHQ guidance meanwhile remains limited, with little detailed direction beyond 'pull together a list of your software's open source components or ask your suppliers.' ... In this sense, the EU has certainly been listening. The recently released Cyber Resilience Act (CRA) is its proposed regulation to combat threats affecting any digital entity and 'bolster cyber security rules to ensure more secure hardware and software products'. First, the encouraging bits: the CRA doesn't just call for vendors and producers of software to have (among other things) a Software Bill of Materials (SBoM) - it demands that companies have the ability to recall components.
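For readers unfamiliar with SBoMs, the sketch below prints a deliberately minimal one, loosely following the CycloneDX JSON shape; real SBoMs also carry hashes, licenses, and full dependency graphs, and the component entries here are invented for illustration.

```python
# A deliberately minimal SBoM, loosely in the CycloneDX JSON shape. Real
# documents add component hashes, licenses, suppliers, and dependency graphs.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13",
         "purl": "pkg:generic/openssl@3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1",
         "purl": "pkg:generic/zlib@1.3.1"},
    ],
}
print(json.dumps(sbom, indent=2))
```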


Eight Common Data Strategy Pitfalls

Lack of data culture: Data hidden within silos, with little communication between business units, leads to a lack of data culture. Data Literacy and enterprise-wide data training are required to allow business staff to read, analyze, and discuss data. Data culture is the starting point for developing an effective Data Strategy. ... The Data Strategy is too focused on data and not on the business side of things: When businesses focus too much on just data, the Data Strategy may end up serving only the needs of analytics, with no focus on business needs. An ideal Data Strategy enlists human capabilities and provides opportunities for training staff to carry out the strategy to meet business goals. This approach works better if citizen data scientists are included in strategy teams to bridge the gap between the data scientist and the business analyst. ... Investing in data technology before democratizing data: In many cases, Data Strategy initiatives focus on quick investment in technology without first addressing data access issues. If data access is not considered first, costly technology investments will go to waste.


Here's Why Your Data Science Project Failed (and How to Succeed Next Time)

Every data science project needs to start with an evaluation of your primary goals. What opportunities are there to improve your core competency? Are there any specific questions you have about your products, services, customers, or operations? And is there a small and easy proof of concept you can launch to gain traction and master the technology? The above use case from GE is a prime example of having a clear goal in mind. The multinational company was in the middle of restructuring, re-emphasizing its focus on aero engines and power equipment. With the goal of reducing their six- to 12-month design process, they decided to pursue a machine learning project capable of increasing the efficiency of product design within their core verticals. As a result, this project promises to decrease design time and budget allocated for R&D. Organizations that embody GE's strategy will face fewer false starts with their data science projects. For those that are still unsure about how to adapt data-driven thinking to their business, an outsourced partner can simplify the selection process and optimize your outcomes.


5 Skills That Make a Successful Data Manager

The role of a data manager in an organization is tricky. This person is often neither an IT guy who implements databases on his/her own, nor a business guy who is actually responsible for data or processes (that's rather a Data Steward's area of responsibility). So what's the real value-add of a data manager (or even a data management department)? In my opinion, you need someone who builds bridges between the different data stakeholders on a methodical level. It's rather easy to find people who consider themselves experts in a particular business area, data analysis method, or IT tool, but it is rather complicated to find one person who is willing to connect all these people and organize their competencies, as is often required in data projects. So what I am referring to are skills like networking, project management, stakeholder management, and change management, which are required to build a data community step by step as the backbone for Data Governance. Without people, a data manager will fail! So in my opinion, a recruiter who seeks data managers should not only challenge technical skills but also these people skills.


Why distributed ledger technology needs to scale back its ambition

There is nonetheless an expectation that DLT can prove to be a net good for financial markets. Foreign exchange markets have an estimated $8.9 trillion at risk every day because the final settlement of transactions between two parties can take days. This is why the Financial Stability Board and the Committee on Payments and Market Infrastructures have focused their efforts on enhancing cross-border payments with a comprehensive global roadmap. Part of this roadmap includes exploring the use of DLT and Central Bank Digital Currencies. The problem may not be the technology itself, but the aim of replacing current technology systems with distributed networks. DLT networks are being designed to completely overhaul and replace the legacy technology that financial markets depend on today. Yet many pilot projects, such as mBridge and Jura, rely on a single blockchain developed by a single vendor. This introduces a single point of trust and removes many of the benefits of disintermediation.


Why is “information architecture” at the centre of the design process?

The information architecture within a design (both process and output) makes the balancing within the equation possible. It also ensures the equation is “solvable” by other people. It does this by introducing logical coherence. It ensures words, images, shapes and colours are used consistently. And it ensures that as we move from idea to execution, we stay true to the original intent — and can clearly articulate it — so that we can meaningfully measure the effectiveness of our design. Without this internal coherence and confidence that our output is an accurate, reliable test of our hypothesis, we’re not doing design. The power of design which has a consistent information architecture is that if we find that our idea (which we translate to intent, experiments and experiences) is not equal to the problem, we can interrogate every part of the equation. We may have made a mistake in execution. Maybe our idea wasn’t quite right. Or even more powerfully, maybe we didn’t really understand the problem fully. 


Improve Your Software Quality with a Strong Digital Immune System

You can improve your software quality with a strong digital immune system since a digital immune system is designed to guard against cyberattacks and other sorts of hostile activities on computer systems, networks, and hardware. It operates by constantly scanning the network and systems for indications of prospective threats and then taking the necessary precautions to thwart or lessen such dangers. This can entail detecting and preventing malicious communications, identifying and containing compromised devices, and patching security holes. A robust digital immune system should offer powerful and efficient protection against cyber threats and assist individuals and companies in staying secure online. Experts in software engineering are searching for fresh methods and strategies to reduce risks and maximize commercial impact. The idea of “digital immunity” offers a direction. It consists of a collection of techniques and tools for creating robust software programmes that provide top-notch user experiences. With the help of this roadmap, software engineering teams may identify and address a wide range of problems, including functional faults, security flaws, and inconsistent data.


Security Bugs Are Fundamentally Different Than Quality Bugs

For each one of the types of testing listed above, a different skillset is required. All of them require patience, attention to detail, basic technical skills, and the ability to document what you have found in a way that the software developers will understand and be able to fix the issue(s). That is where the similarities end. Each one of these types of testing requires different experience, knowledge, and tools, often meaning you need to hire different resources to perform the different tasks. Also, we can’t concentrate on everything at once and still do a great job at each one of them. Although theoretically you could find one person who is both skilled and experienced in all of these areas, it is rare, and that person would likely be costly to employ as a full-time resource. This is one reason that people hired for general software testing are not often also tasked with security testing. Another reason is that people who have the experience and skills to perform thorough and complete security testing are currently a rarity. 



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls