Daily Tech Digest - March 19, 2025


Quote for the day:

“The only true wisdom is knowing that you know nothing.” -- Socrates


How AI is Becoming More Human-Like With Emotional Intelligence

Humanizing AI means designing systems that can understand, interpret, and respond to human emotions in a way that feels natural: AI capable of picking up cues, reading the room, and reacting as a person would, but in a polished way. ... It is only natural that a potential user will prefer to interact with something that acknowledges their queries and engages with them like a human. ... AI should also adapt based on mood and tone; you cannot keep sending automated messages to your users, especially the ones who are irate. AI that sounds and responds like a human helps build trust and rapport with users. ... The humanization of AI also makes it more accessible and inclusive. Voice assistants, screen readers, and AI-powered speech-to-text and text-to-speech tools are good examples. ... As AI becomes more aware and powerful, there are rising concerns about its ethical use. There have to be checks in place that ensure AI doesn’t mimic human emotions to exploit users’ feelings, and users should be clearly told when they are dealing with machine-generated content. Businesses must ensure ethical AI development, prioritizing user trust and transparency; systems should be programmed to respect user privacy and not manipulate users into making purchases or conversions.


Beyond Trends: A Practical Guide to Choosing the Right Message Broker

In distributed systems, messaging patterns define how services communicate and process information. Each pattern comes with unique requirements, such as ordering, scalability, error handling, or parallelism, which guide the selection of an appropriate message broker. ... The Event-Carried State Transfer (ECST) pattern is a design approach used in distributed systems to enable data replication and decentralized processing. In this pattern, events act as the primary mechanism for transferring state changes between services or systems. Each event includes all the necessary information (state) required for other components to update their local state without relying on synchronous calls to the originating service. By decoupling services and reducing the need for real-time communication, ECST enhances system resilience, allowing components to operate independently even when parts of the system are temporarily unavailable. ... The Event Notification Pattern enables services to notify other services of significant events occurring within a system. Notifications are lightweight and typically include just enough information (e.g., an identifier) to describe the event. To process a notification, consumers often need to fetch additional details from the source (and/or other services) by making API calls. 
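
To make the contrast concrete, here is a minimal sketch of the two event shapes in Python. The "customer address changed" scenario, field names, and payload are illustrative assumptions rather than a standard schema: the ECST event carries the full state a consumer needs to update its local copy, while the notification carries only an identifier that the consumer resolves with a follow-up API call.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical example of the same fact expressed two ways.

ecst_event = {
    "event_id": str(uuid.uuid4()),
    "event_type": "CustomerAddressChanged",
    "occurred_at": datetime.now(timezone.utc).isoformat(),
    # ECST: carries the full state the consumer needs, so no callback
    # to the originating customer service is required.
    "payload": {
        "customer_id": "cust-123",
        "name": "Jane Doe",
        "address": {"street": "1 Main St", "city": "Springfield", "zip": "12345"},
    },
}

notification_event = {
    "event_id": str(uuid.uuid4()),
    "event_type": "CustomerAddressChanged",
    "occurred_at": datetime.now(timezone.utc).isoformat(),
    # Notification: just enough to describe the event; the consumer fetches
    # details from the source service's API when it processes it.
    "payload": {"customer_id": "cust-123"},
}

print(json.dumps(ecst_event, indent=2))
print(json.dumps(notification_event, indent=2))
```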


Successful AI adoption comes down to one thing: Smarter, right-size compute

A common perception in the enterprise is that AI solutions require a massive investment right out of the gate, across the board, on hardware, software and services. That has proven to be one of the most common barriers to adoption — and an easy one to overcome, Balasubramanian says. The AI journey kicks off with a look at existing tech and upgrades to the data center; from there, an organization can start scaling for the future by choosing technology that can be right-sized for today’s problems and tomorrow’s goals. “Rather than spending everything on one specific type of product or solution, you can now right-size the fit and solution for the organizations you have,” Balasubramanian says. “AMD is unique in that we have a broad set of solutions to meet bespoke requirements. We have solutions from cloud to data center, edge solutions, client and network solutions and more. ... While both hardware and software are crucial for tackling today’s AI challenges, open-source software will drive true innovation. “We believe there’s no one company in this world that has the answers for every problem,” Balasubramanian says. “The best way to solve the world’s problems with AI is to have a united front, and to have a united front means having an open software stack that everyone can collaborate on. ...”


CDOs: Your AI is smart, but your ESG is dumb. Here’s how to fix it

Embedding sustainability into a data strategy requires a deliberate shift in how organizations manage, govern and leverage their data assets. CDOs must ensure that sustainability considerations are integrated into every phase of data decision-making rather than treating ESG as an afterthought or compliance requirement. A well-designed strategy can help organizations balance business growth with environmental, social and governance (ESG) responsibility while improving operational efficiency. ... Advanced analytics and AI can unlock new opportunities for sustainability. Predictive modeling can help companies optimize energy consumption, while AI-driven insights can identify supply chain inefficiencies that lead to excessive waste. For example, retailers are leveraging AI-powered demand forecasting to reduce overproduction and excess inventory, significantly cutting down carbon emissions and waste.  ... Creating a sustainability-focused data culture requires education and engagement across all levels of the organization. CDOs can implement ESG-focused data literacy programs to ensure that business leaders, data scientists and engineers understand the impact of their work on sustainability. Encouraging collaboration between data teams and sustainability departments ensures ESG considerations remain a priority throughout the data lifecycle.


Five Critical Shifts for Cloud Native at a Crossroads

General-purpose operating systems can become a Kubernetes bottleneck at scale. Traditional OS environments are designed for a wide range of use cases, carry unnecessary overhead, and bring security risks when running cloud native workloads. Enterprises are instead increasingly turning to specialized operating systems that are purpose-built for Kubernetes environments, finding that this shift has advantages across security, reliability and operational efficiency. The security implications are particularly compelling. While traditional operating systems leave many potential entry points exposed, specialized cloud native operating systems take a radically different approach. ... Cost-conscious organizations (Is there another kind?) are discovering that running Kubernetes workloads solely in public clouds isn’t always the best approach. Momentum has continued to grow toward pursuing hybrid and on-premises strategies for greater control over both costs and capabilities. This shift isn’t just about cost savings; it’s about building infrastructure precisely tailored to specific workload requirements, whether that’s ultra-low latency for real-time applications or specialized configurations for AI/machine learning workloads.


Moving beyond checkbox security for true resilience

A threat-informed and risk-based approach is paramount in an era of perpetually constrained cybersecurity budgets. Begin by assessing the organization’s crown jewels – sensitive customer data, intellectual property, financial records, or essential infrastructure. These assets represent the core of the organization’s value and should demand the highest priority in protection. ... Organizations frequently underestimate the risks from unmanaged devices, also called shadow IT, and within their software supply chain. As reliance on third-party software and libraries embedded within the organization and in-house apps deepens, the attack surface becomes a constantly shifting landscape with hidden vulnerabilities. Unmanaged devices and unauthorized applications are equally problematic and can introduce unexpected and substantial risks. These often-overlooked elements create critical blind spots, allowing attackers to exploit vulnerabilities that existing security measures might miss. To address these blind spots, organizations must implement rigorous vendor risk management programs, track IT assets, and enforce application control policies. ... Regardless of the trends, CISOs should assess the specific threats relative to their organization and ensure that foundational security measures are in place.


How to simplify app migration with generative AI tools

Reviewing existing documentation and interviewing subject matter experts is often the best starting point to prepare for an application migration. Understanding the existing system’s business purposes, workflows, and data requirements is essential when seeking opportunities for improvement. This outside-in review helps teams develop a checklist of which requirements are essential to the migration, where changes are needed, and where unknowns require further discovery. Furthermore, development teams should expect and plan a change management program to support end users during the migration. ... Technologists will also want to do an inside-out analysis, including performing a code review, diagramming the runtime infrastructure, conducting data discovery, and analyzing log files or other observability artifacts. Even more important may be capturing the dependencies, including dependent APIs, third-party data sources, and data pipelines. This architectural review can be time-consuming and often requires significant technical expertise. Using genAI can simplify and accelerate the process. “GenAI is impacting app migrations in several ways, including helping developers and architects answer questions quickly regarding architectural and deployment options for apps targeted for migration,” says Rob Skillington, CTO & co-founder of Chronosphere.


How to Stop Expired Secrets from Disrupting Your Operations

Unlike human users, the credentials used by NHIs often don’t receive expiration reminders or password reset prompts. When a credential quietly reaches the end of its validity period, the impact can be immediate and severe: application failures, broken automation workflows, service downtime, and urgent security escalations. And unlike the food in your fridge, there’s no nosy relative to point out that your secrets have gone bad. ... While TLS/SSL certificate expiration often gets the most attention due to its visible impact on websites, many types of machine credentials have built-in expiration. API keys silently time out in backend services, OAuth tokens reach their limits, IAM role sessions terminate, Kubernetes service account tokens expire, and database connection credentials become invalid. ... The primary consequence of an expired credential is a failed authentication attempt. At first glance, this might seem like a simple fix – just replace the credential and restart the service. But in reality, identifying and resolving an expired credential issue is rarely straightforward. Consider a cloud-native application that relies on multiple APIs, internal microservices, and external integrations. If an API key or OAuth token used by a backend service expires, the application might return unexpected errors, time out, or degrade in ways that aren’t immediately obvious. 
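
A small sketch of the proactive pattern this implies, in Python: read the expiry baked into a JWT-style token and rotate it inside a grace window rather than waiting for a failed authentication. The refresh_fn hook and the five-minute skew are assumptions standing in for whatever the identity provider or secrets manager actually exposes.

```python
import base64
import json
import time

def _jwt_expiry(token: str) -> int:
    """Return the 'exp' claim (Unix time) from an unverified JWT payload."""
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return int(claims["exp"])

def get_valid_token(current_token: str, refresh_fn, skew_seconds: int = 300) -> str:
    """Refresh the credential before it expires instead of after a failed call.

    refresh_fn is a placeholder for whatever rotation mechanism is available
    (e.g. an OAuth refresh grant or a secrets-manager call); it is an assumption.
    """
    if _jwt_expiry(current_token) - time.time() < skew_seconds:
        return refresh_fn()   # rotate proactively, inside the grace window
    return current_token      # still valid, keep using it
```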


Role of Interconnects in GenAI

The emergence of High-Performance Computing (HPC) demanded a leap in interconnect capabilities. InfiniBand entered the scene, offering significantly higher throughput and lower latency compared to existing technologies. It became the cornerstone of data centers and large-scale computing environments, enabling the rapid exchange of massive datasets required for complex simulations and scientific computations. Simultaneously, the introduction of Peripheral Component Interconnect Express (PCIe) revolutionized off-chip communication. ... the scalability of GenAI models, particularly large language models, relies heavily on robust interconnects. These systems facilitate the distribution of computational load across multiple processors and machines, enabling the training and deployment of increasingly complex models. This scalability is achieved through efficient network topologies that minimize communication bottlenecks, allowing for both vertical and horizontal scaling. Parallel processing, a cornerstone of GenAI training, is also dependent on effective interconnects. Model and data parallelism require seamless communication and synchronization between processors working on different segments of data or model components. Interconnects ensure that these processors can exchange information efficiently, maintaining consistency and accuracy throughout the training process.


That breach cost HOW MUCH? How CISOs can talk effectively about a cyber incident’s toll

Many CISOs struggle to articulate the financial impact of cyber incidents. “The role of a CISO is really interesting and uniquely challenging because they have to have one foot in the technical world and one foot in the executive world,” Amanda Draeger, principal cybersecurity consultant at Liberty Mutual Insurance, tells CSO. “And that is a difficult challenge. Finding people who can balance that is like finding a unicorn.” ... Quantifying the costs of an incident in advance is an inexact art greatly aided by tabletop exercises. “The best way in my mind to flush all of this out is by going through a regular incident response tabletop exercise,” Gary Brickhouse, CISO at GuidePoint Security, tells CSO. “People know their roles so that when it does happen, you’re prepared.” It also helps to develop an incident response (IR) plan and practice it frequently. “I highly recommend having an incident response plan that exists on paper,” Draeger says. “I mean literal paper so that when your entire network explodes, you still have a list of phone numbers and contacts and something to get you started.” Not only does the incident response plan lead to better cost estimates, but it will also lead to a quicker return of network functions. “Practice, practice, practice,” Draeger says. 

Daily Tech Digest - March 17, 2025


Quote for the day:

"The leadership team is the most important asset of the company and can be its worst liability" -- Med Jones


Inching towards AGI: How reasoning and deep research are expanding AI from statistical prediction to structured problem-solving

There are various scenarios that could emerge from the near-term arrival of powerful AI. It is challenging and frightening that we do not really know how this will go. New York Times columnist Ezra Klein addressed this in a recent podcast: “We are rushing toward AGI without really understanding what that is or what that means.” He claims, for example, that there is little critical thinking or contingency planning going on around the implications, such as what this would truly mean for employment. Of course, there is another perspective on this uncertain future and lack of planning, as exemplified by Gary Marcus, who believes deep learning generally (and LLMs specifically) will not lead to AGI. Marcus issued what amounts to a takedown of Klein’s position, citing notable shortcomings in current AI technology and suggesting it is just as likely that we are a long way from AGI. ... While each of these scenarios appears plausible, it is discomforting that we really do not know which are the most likely, especially since the timeline could be short. We can see early signs of each: AI-driven automation increasing productivity, misinformation that spreads at scale and erodes trust, and concerns over disingenuous models that resist their guardrails. Each scenario would cause its own adaptations for individuals, businesses, governments and society.


AI in Network Observability: The Dawn of Network Intelligence

ML algorithms, trained on vast datasets of enriched, context-savvy network telemetry, can now detect anomalies in real-time, predict potential outages, foresee cost overruns, and even identify subtle performance degradations that would otherwise go unnoticed. Imagine an AI that can predict a spike in malicious traffic based on historical patterns and automatically trigger mitigations to block the attack and prevent disruption. That’s a straightforward example of the power of AI-driven observability, and it’s already possible today. But AI’s role isn’t limited to number crunching. GenAI is revolutionizing how we interact with network data. Natural language interfaces allow engineers to ask questions like: “What’s causing latency on the East Coast?” and receive concise, insightful answers. ... These aren’t your typical AI algorithms. Agentic AI systems possess a degree of autonomy, allowing them to make decisions and take actions within a defined framework. Think of them as digital network engineers, initially assisting with basic tasks but constantly learning and evolving, making them capable of handling routine assignments, troubleshooting fundamental issues, or optimizing network configurations.
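
As a toy illustration of the number-crunching half, here is a rolling-baseline anomaly check in Python. The window size and the three-sigma threshold are arbitrary assumptions; production systems would use much richer, context-enriched telemetry and learned models.

```python
from collections import deque
from statistics import mean, pstdev

WINDOW = 60          # number of recent samples forming the baseline (assumed)
THRESHOLD = 3.0      # z-score above which a sample is flagged (assumed)

baseline = deque(maxlen=WINDOW)

def check_latency(sample_ms: float) -> bool:
    """Return True if the latency sample looks anomalous against the rolling baseline."""
    anomalous = False
    if len(baseline) >= WINDOW:
        mu, sigma = mean(baseline), pstdev(baseline)
        if sigma > 0 and (sample_ms - mu) / sigma > THRESHOLD:
            anomalous = True
    baseline.append(sample_ms)   # the sample becomes part of the new baseline
    return anomalous
```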


Edge Computing and the Burgeoning IoT Security Threat

A majority of IoT devices come with wide-open default security settings. The IoT industry has been lax in setting and agreeing to device security standards. Additionally, many IoT vendors are small shops that are more interested in rushing their devices to market than in security standards. Another reason for the minimal security settings on IoT devices is that IoT device makers expect corporate IT teams to implement their own device settings. This occurs when IT professionals -- normally part of the networking staff -- manually configure each IoT device with security settings that conform with their enterprise security guidelines. ... Most IoT devices are not enterprise-grade. They might come with weak or outdated internal components that are vulnerable to security breaches or contain sub-components with malicious code. Because IoT devices are built to operate over various communication protocols, there is also an ever-present risk that they aren't upgraded for the latest protocol security. Given the large number of IoT devices from so many different sources, it's difficult to execute a security upgrade across all platforms. ... Part of the senior management education process should be gaining support from management for a centralized RFP process for any new IT, including edge computing and IoT. 


Data Quality Metrics Best Practices

While accuracy, consistency, and timeliness are key data quality metrics, the acceptable thresholds for these metrics to achieve passable data quality can vary from one organization to another, depending on their specific needs and use cases. There are a few other quality metrics, including integrity, relevance, validity, and usability. Depending on the data landscape and use cases, data teams can select the most appropriate quality dimensions to measure. ... Data quality metrics and data quality dimensions are closely related, but aren’t the same. The purpose, usage, and scope of both concepts vary too. Data quality dimensions are attributes or characteristics that define data quality. On the other hand, data quality metrics are values, percentages, or quantitative measurements of how well the data meets the above characteristics. A good analogy to explain the differences between data quality metrics and dimensions would be the following: Consider data quality dimensions as talking about a product’s attributes – it’s durable, long-lasting, or has a simple design. Then, data quality metrics would be how much it weighs, how long it lasts, and the like. ... Every solution starts with a problem. Identify the pressing concerns – missing records, data inconsistencies, format errors, or old records. What is it that you are trying to solve? 
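
A minimal sketch of the dimension-versus-metric distinction in Python, using pandas: completeness and validity are the dimensions, and the two percentages computed below are the corresponding metrics. The sample records and the email-format rule are illustrative assumptions.

```python
import pandas as pd

records = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "email": ["a@example.com", None, "not-an-email", "d@example.com"],
    "signup_date": ["2025-01-04", "2025-02-11", None, "2025-03-01"],
})

# Completeness metric: share of non-null cells across the dataset.
completeness = records.notna().mean().mean() * 100

# Validity metric: share of email values matching a simple well-formedness rule.
validity = records["email"].str.contains(
    r"^[^@\s]+@[^@\s]+\.[^@\s]+$", na=False).mean() * 100

print(f"Completeness: {completeness:.1f}%  Validity (email): {validity:.1f}%")
```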


How to Modernize Legacy Systems with Microservices Architectures

Scalability and agility are two significant benefits of a microservices architecture. With monolithic applications, it's difficult to isolate and scale distinct application functions under variable loads. Even if a monolithic application is scaled to meet increased demand, it could take months and significant capital to reach the end goal. By then, the demand might have changed, or disappeared altogether, and the application will waste resources, bogging down the larger operating system. ... microservices architectures make applications more resilient. Because monolithic applications function on a single codebase, a single error during an update or maintenance can create large-scale problems. Microservices-based applications, however, work around this issue. Because each function runs on its own codebase, it's easier to isolate and fix problems without disrupting the rest of the application's services. ... Microservices might seem like a one-size-fits-all, no-downsides approach to modernizing legacy systems, but the first step to any major system migration is to understand the pros and cons. No major project comes without challenges, and migrating to microservices is no different. For instance, personnel might be resistant to changes associated with microservices.


Elevating Employee Experience: Transforming Recognition with AI

AI’s ability to analyse patterns in behaviour, performance, and preferences enables organisations to offer personalised recognition that resonates with employees. AI-driven platforms provide real-time insights to leaders, ensuring that appreciation is timely, equitable, and free from unconscious biases. ... Burnout remains a critical challenge in today’s workplace, especially as workloads intensify and hybrid models blur work-life boundaries. With 84% of recognised employees being less likely to experience burnout, AI-driven recognition programs offer a proactive approach to employee well-being. Candy pointed out that AI can monitor engagement levels, detect early signs of burnout, and prompt managers to step in with meaningful appreciation. By tracking sentiment analysis, workload patterns, and feedback trends, AI helps HR teams intervene before burnout escalates. “Recognition isn’t just about celebrating big milestones; it’s about appreciating daily efforts that often go unnoticed. AI helps ensure no contribution is left behind, reinforcing a culture of continuous encouragement and support,” remarked Candy Fernandez. Arti Dua expanded on this, explaining that AI can help create customised recognition strategies that align with employees’ stress levels and work patterns, ensuring appreciation is both timely and impactful.


11 surefire ways to fail with AI

“The fastest way to doom an AI initiative? Treat it as a tech project instead of a business transformation,” Pallath says. “AI doesn’t function in isolation — it thrives on human insight, trust, and collaboration.” The assumption that just providing tools will automatically draw users is a costly myth, Pallath says. “It has led to countless failed implementations where AI solutions sit unused, misaligned with actual workflows, or met with skepticism,” he says. ... Without a workforce that embraces AI, “achieving real business impact is challenging,” says Sreekanth Menon, global leader of AI/ML at professional services and solutions firm Genpact. “This necessitates leadership prioritizing a digital-first culture and actively supporting employees through the transition.” To ease employee concerns about AI, leaders should offer comprehensive AI training across departments, Menon says. ... AI isn’t a one-time deployment. “It’s a living system that demands constant monitoring, adaptation, and optimization,” Searce’s Pallath says. “Yet, many organizations treat AI as a plug-and-play tool, only to watch it become obsolete. Without dedicated teams to maintain and refine models, AI quickly loses relevance, accuracy, and business impact.” Market shifts, evolving customer behaviors, and regulatory changes can turn a once-powerful AI tool into a liability, Pallath says.


Now Is the Time to Transform DevOps Security

Traditionally, security was often treated as an afterthought in the software development process, typically placed at the end of the development cycle. This approach worked when development timelines were longer, allowing enough time to tackle security issues. As development speeds have increased, however, this final security phase has become less feasible. Vulnerabilities that arise late in the process now require urgent attention, often resulting in costly and time-intensive fixes. Overlooking security in DevOps can lead to data breaches, reputational damage, and financial loss. Delays increase the likelihood of vulnerabilities being exploited. As a result, companies are rethinking how security should be embedded into their development processes. ... Significant challenges are associated with implementing robust security practices within DevOps workflows. Development teams often resist security automation because they worry it will slow delivery timelines. Meanwhile, security teams get frustrated when developers bypass essential checks in the name of speed. Overcoming these challenges requires more than just new tools and processes. It's critical for organizations to foster genuine collaboration between development and security teams by creating shared goals and metrics. 


AI development pipeline attacks expand CISOs’ software supply chain risk

Malicious software supply chain campaigns are targeting development infrastructure and code used by developers of AI and large language model (LLM) machine learning applications, the study also found. ... Modern software supply chains rely heavily on open-source, third-party, and AI-generated code, introducing risks beyond the control of software development teams. Better controls over the software the industry builds and deploys are required, according to ReversingLabs. “Traditional AppSec tools miss threats like malware injection, dependency tampering, and cryptographic flaws,” said ReversingLabs’ chief trust officer Saša Zdjelar. “True security requires deep software analysis, automated risk assessment, and continuous verification across the entire development lifecycle.” ... “Staying on top of vulnerable and malicious third-party code requires a comprehensive toolchain, including software composition analysis (SCA) to identify known vulnerabilities in third-party software components, container scanning to identify vulnerabilities in third-party packages within containers, and malicious package threat intelligence that flags compromised components,” Meyer said.


Data Governance as an Enabler — How BNY Builds Relationships and Upholds Trust in the AI Era

Governance is like bureaucracy. A lot of us grew up seeing it as something we don’t naturally gravitate toward. It’s not something we want more of. But we take a different view: governance is enabling. I’m responsible for data governance at Bank of New York. We operate in a hundred jurisdictions, with regulators and customers around the world. Our most vital equation is the trust we build with the world around us, and governance is what ensures we uphold that trust. Relationships are our top priority. What does that mean in practice? It means understanding what data can be used for, whose data it is, where it should reside, and when it needs to be obfuscated. It means ensuring data security. What happens to data at rest? What about data in motion? How are entitlements managed? It’s about defining a single source of truth, maintaining data quality, and managing data incidents. All of that is governance. ... Our approach follows a hub-and-spoke model. We have a strong central team managing enterprise assets, but we've also appointed divisional data officers in each line of business to oversee local data sets that drive their specific operations. These divisional data officers report to the enterprise data office. However, they also have the autonomy to support their business units in a decentralized manner.

Daily Tech Digest - March 16, 2025


Quote for the day:

"Absolute identity with one's cause is the first and great condition of successful leadership." -- Woodrow Wilson


What Do You Get When You Hire a Ransomware Negotiator?

Despite calls from law enforcement agencies and some lawmakers urging victims not to make any ransom payment, the demand for experienced ransomware negotiators remains high. The negotiators say they provide a valuable service, even if the victim has no intention to pay. They bring skills into an incident that aren't usually found in the executive suite - strategies for dealing with criminals. ... Negotiation is more a thinking game, in which you try to outsmart the hackers to buy time and ascertain valuable insight, said Richard Bird, a ransomware negotiator who draws many of his skills from his past stint as a law enforcement crisis-aversion expert - talking people out of attempting suicide or negotiating with kidnappers for the release of hostages. "The biggest difference is that when you are doing a face-to-face negotiation, you can pick up lots of information from a person on their non-verbal communications such as eye gestures, body movements, but when you are talking to someone over email or messaging apps that can cause some issues - because you have got to work out how the person might perceive," Bird said. One advantage of online negotiation is that it gives the negotiator time to reflect on what to tell the hackers.


Managing Data Security and Privacy Risks in Enterprise AI

While enterprise AI presents opportunities to achieve business goals in a way not previously conceived, one should also understand and mitigate potential risks associated with its development and use. Even AI tools designed with the most robust security protocols may still present a multitude of risks. These risks include intellectual property theft, privacy concerns when training data and/or output data may contain personally identifiable information (PII) or protected health information (PHI), and security vulnerabilities stemming from data breaches and data tampering. ... Privacy and data security in the context of AI are interdependent disciplines that often require simultaneous consideration and action. To begin with, advanced enterprise AI tools are trained on prodigious amounts of data processed using algorithms that should be—but are not always—designed to comply with privacy and security laws and regulations. ... Emerging laws and regulations related to AI are thematically consistent in their emphasis on accountability, fairness, transparency, accuracy, privacy, and security. These principles can serve as guideposts when developing AI governance action plans that can make your organization more resilient as advances in AI technology continue to outpace the law.


Mastering Prompt Engineering with Functional Testing: A Systematic Guide to Reliable LLM Outputs

Creating efficient prompts for large language models often starts as a simple task… but it doesn’t always stay that way. Initially, following basic best practices seems sufficient: adopt the persona of a specialist, write clear instructions, require a specific response format, and include a few relevant examples. But as requirements multiply, contradictions emerge, and even minor modifications can introduce unexpected failures. What was working perfectly in one prompt version suddenly breaks in another. ... What might seem like a minor modification can unexpectedly impact other aspects of a prompt. This is not only true when adding a new rule but also when adding more detail to an existing rule, like changing the order of the set of instructions or even simply rewording it. These minor modifications can unintentionally change the way the model interprets and prioritizes the set of instructions. The more details you add to a prompt, the greater the risk of unintended side effects. By trying to give too many details to every aspect of your task, you also increase the risk of getting unexpected or deformed results. It is, therefore, essential to find the right balance between clarity and a high level of specification to maximise the relevance and consistency of the response.
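
In that spirit, here is a minimal sketch of a functional prompt test in Python: run the same prompt several times and assert that the output schema and key values hold, so an unrelated prompt edit that breaks this behaviour fails loudly. The generate() placeholder, the prompt wording, and the five-run count are assumptions to be replaced with your own model client and thresholds.

```python
import json

def generate(prompt: str) -> str:
    """Placeholder for the model call (an assumption, not a real API);
    swap in whichever client your stack actually uses."""
    raise NotImplementedError

PROMPT = (
    "Extract the invoice number and total from the text below. "
    "Respond with JSON containing exactly the keys 'invoice_number' and 'total'.\n\n"
    "Invoice 2025-0042, total due: 199.00 EUR"
)

def test_prompt_returns_expected_schema():
    # Repeat the call: a single passing run can hide instability introduced
    # by a change made elsewhere in the prompt.
    for _ in range(5):
        reply = json.loads(generate(PROMPT))
        assert set(reply) == {"invoice_number", "total"}
        assert reply["invoice_number"] == "2025-0042"
```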


You need to prepare for post-quantum cryptography now. Here’s why

"In some respects, we're already too late," said Russ Housley, founder of Vigil Security LLC, in a panel discussion at the conference. Housley and other speakers at the conference brought up the lesson from the SHA-1 to SHA-2 hashing-algorithm transition, which began in 2005 and was supposed to take five years but took about 12 to complete — "and that was a fairly simple transition," Housley noted. In a different panel discussion, InfoSec Global Vice President of Cryptographic Research & Development Vladimir Soukharev called the upcoming move to post-quantum cryptography a "much more complicated transition than we've ever seen in cryptographic history." ... The asymmetric algorithms that NIST is phasing out are thought to be vulnerable to this. The new ones that NIST is introducing use even more complicated math that quantum computers probably can't crack (yet). Today, an attacker could watch you log into Amazon and capture the asymmetrically-encrypted exchange of the symmetric key that secures your shopping session. But that would be pointless because the attacker couldn't decrypt that key exchange. In five or 10 years, it'll be a different story. The attacker will be able to decrypt the key exchange and then use that stolen key to reveal your shopping session


Network Forensics: A Short Guide to Digital Evidence Recovery from Computer Networks

At a technical level, this discipline operates across multiple layers of the OSI model. At the lower layers, it examines MAC addresses, VLAN tags, and frame metadata, while at the network and transport layers, it analyses IP addresses, routing information, port usage, and TCP/UDP session characteristics. ... Network communications contain rich metadata in their headers—the “envelope” information surrounding actual content. This includes IP headers with source/destination addresses, fragmentation flags, and TTL values; TCP/UDP headers containing port numbers, sequence numbers, window sizes, and flags; and application protocol headers with HTTP methods, DNS query types, and SMTP commands. This metadata remains valuable even when content is encrypted, revealing communication patterns, timing relationships, and protocol behaviors. ... Encryption presents perhaps the most significant technical challenge for modern network forensics, with over 95% of web traffic now encrypted using TLS. Despite encryption, substantial metadata remains visible, including connection details, TLS handshake parameters, certificate information, and packet sizing and timing patterns. This observable data still provides significant forensic value when properly analyzed.
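
A bare-bones sketch of metadata extraction at those layers, using only Python's standard library: pulling addresses, TTL, ports, and TCP flags straight out of raw IPv4/TCP headers. It deliberately ignores IP options, IPv6, and error handling; dedicated forensic tooling would be used in practice.

```python
import struct

def parse_ipv4_tcp(packet: bytes) -> dict:
    """Extract routing and session metadata from a raw IPv4/TCP packet."""
    # IPv4 fixed header: version/IHL, DSCP/ECN, total length, ID, flags/frag,
    # TTL, protocol, checksum, source address, destination address.
    ver_ihl, _, total_len, _, _, ttl, proto, _, src, dst = struct.unpack(
        "!BBHHHBBH4s4s", packet[:20])
    ihl = (ver_ihl & 0x0F) * 4                       # IP header length in bytes
    # First 14 bytes of TCP: ports, sequence number, ack, data offset + flags.
    sport, dport, seq, _, off_flags = struct.unpack("!HHIIH", packet[ihl:ihl + 14])
    return {
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
        "ttl": ttl,
        "protocol": proto,
        "src_port": sport,
        "dst_port": dport,
        "tcp_seq": seq,
        "tcp_flags": off_flags & 0x01FF,             # lower 9 bits carry the flags
    }
```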


Modernising Enterprise Architecture: Bridging Legacy Systems with Jargon

The growing gap between enterprise-wide architecture and the actual work being done on the ground leads to manual processes, poor integration, and limits how effectively teams can work across modern DevOps environments — ultimately creating the next generation of rigid, hard-to-maintain systems — repeating the mistakes of the past. ... Instead of treating enterprise architecture as a walled-off function, Jargon enables continuous integration between high-level architecture and real-world software design — bridging the gap between enterprise-wide planning and hands-on development while automating validation and collaboration. ... Jargon is already working with organisations to bridge the gap between modern API-first design and legacy enterprise tooling, enabling teams to modernise workflows without abandoning existing systems. While our support for OpenAPI and JSON Schema is already in place, we’re planning to add XMI support to bring Jargon’s benefits to a wider audience of enterprises who use legacy architecture tools. By supporting XMI, Jargon will allow enterprises to unlock their existing architecture investments while seamlessly integrating API-driven workflows. This helps address the challenge of top-down governance conflicting with bottom-up development needs, enabling smoother collaboration across teams.


CAIOs are stepping out from the CIO’s shadow

The CAIO position as such is still finding its prime location in the org chart, Fernández says, often assuming a position of medium-high responsibility in reporting to the CDO and thus, in turn, to the CIO. “These positions that are being created are very ‘business partner’ style,” he says, “to make these types of products understood, what needs they have, and to carry them out.” Casado adds: “For me, the CIO does not have such a ‘business case’ component — of impact on the profit and loss account. The role of artificial intelligence is very closely tied to generating efficiencies on an ongoing basis,” as well as implying “continuous adoption.” “It is essential that there is this adoption and that implies being very close to the people,” he says. ... Garnacho agrees, stating that, in less mature AI development environments, the CIO can assume CAIO functions. “But as the complexity and scope of AI grows, the specialization of the CAIO makes the difference,” he says. This is because “although the CIO plays a fundamental role in technological infrastructure and data management, AI and its challenges require specific leadership. In our view, the CIO lays the technological foundations, but it is the CAIO who drives the vision.” In this emerging division of functions, other positions may be impacted by the emergence of the AI chief.


Forget About Cloud Computing. On-Premises Is All the Rage Again

Cloud costs have a tendency to balloon over time: Storage costs per GB of data might seem low, but when you’re dealing with terabytes of data—which even we as a three-person startup are already doing—costs add up very quickly. Add to this retrieval and egress fees, and you’re faced with a bill you cannot unsee. Steep retrieval and egress fees only serve one thing: Cloud providers want to incentivize you to keep as much data as possible on the platform, so they can make money off every operation. If you download data from the cloud, it will cost you inordinate amounts of money. Variable costs based on CPU and GPU usage often spike during high-performance workloads. A report by CNCF found that almost half of Kubernetes adopters had exceeded their budget as a result. Kubernetes is open-source container orchestration software that is often used for cloud deployments. The pay-per-use model of the cloud has its advantages, but it makes billing unpredictable. Costs can then explode during usage spikes. Cloud add-ons for security, monitoring, and data analytics also come at a premium, which often increases costs further. As a result, many IT leaders have started migrating back to on-premises servers. A 2023 survey by Uptime found that 33% of respondents had repatriated at least some production applications in the past year.
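
For a sense of how quickly the line items compound, here is a back-of-the-envelope Python calculation; every figure is a placeholder assumption, not any provider's published rate.

```python
# Illustrative, assumed inputs — replace with your own volumes and unit prices.
stored_tb = 20                 # data kept in object storage
egress_tb_per_month = 5        # data pulled back out each month

storage_per_gb = 0.023         # $/GB-month (assumed)
egress_per_gb = 0.09           # $/GB transferred out (assumed)

monthly = (stored_tb * 1024 * storage_per_gb
           + egress_tb_per_month * 1024 * egress_per_gb)
print(f"~${monthly:,.0f} per month, ~${monthly * 12:,.0f} per year")
```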


IT leaders are driving a new cloud computing era

CIOs have become increasingly frustrated with vendor pricing models that lock them into unpredictable and often unfavorable long-term commitments. Many find that mounting operational costs frequently outweigh the promised savings from cloud computing. It’s no wonder that leadership teams are beginning to shift gears, discussing alternative solutions that might better serve their best interests. ... Regional or sovereign clouds offer significant advantages, including compliance with local data regulations that ensure data sovereignty while meeting industry standards. They reduce latency by placing data centers nearer to users, enhancing service performance. Security is also bolstered, as these clouds can apply customized protection measures against specific threats. Additionally, regional clouds provide customized services that cater to local needs and industries and offer more responsive customer support than larger global providers. ... The pushback against traditional cloud providers is not driven only by unexpected costs; it also reflects enterprise demand for greater autonomy, flexibility, and a skillfully managed approach to technology infrastructure. Effectively navigating the complexities of cloud computing will require organizations to reassess their dependencies and stay vigilant in seeking solutions that align with their growth strategies.


How Intelligent Continuous Security Enables True End-to-End Security

Intelligent Continuous Security™ (ICS) is the next evolution — harnessing AI-driven automation, real-time threat detection and continuous compliance enforcement to eliminate these inefficiencies. ICS extends beyond DevSecOps to also close security gaps with SecOps, ensuring end-to-end continuous security across the entire software lifecycle. This article explores how ICS enables true DevOps transformation by addressing the shortcomings of traditional security, reducing friction across teams, and accelerating secure software delivery. ... As indicated in the article The Next Generation of Security, “The Future of Security is Continuous. Security isn’t a destination — it’s a continuous process of learning, adapting and evolving. As threats become smarter, faster, and more unpredictable, security must follow suit.” Traditional security practices were designed for a slower, waterfall-style development process. ... Intelligent Continuous Security (ICS) builds on DevSecOps principles but goes further by embedding AI-driven security automation throughout the SDLC. ICS creates a seamless security layer that integrates with DevOps pipelines, reducing the friction that has long plagued DevSecOps initiatives. ... ICS shifts security testing left by embedding automated security checks at every stage of development.

Daily Tech Digest - March 15, 2025


Quote for the day:

"The most powerful leadership tool you have is your own personal example." -- John Wooden


Guardians of AIoT: Protecting Smart Devices from Data Poisoning

Machine learning algorithms rely on datasets to identify and predict patterns. The performance of the model is determined by the quality and completeness of this data. Data poisoning attacks tamper with the AI's knowledge by introducing false or misleading information, usually following these steps: the attacker gains access to the training dataset and injects malicious samples; the AI is then trained on the poisoned data and incorporates these corrupt patterns into its decision-making process; once the poisoned model is deployed, the attackers exploit it to bypass a security system or tamper with critical tasks. ... The addition of AI into IoT ecosystems has intensified the potential attack surface. Traditional IoT devices were limited in functionality, but AIoT systems rely on data-driven intelligence, which makes them more vulnerable to such attacks and challenges device security: AIoT devices collect data from many different sources, which increases the likelihood of data being tampered with; poisoned data can have catastrophic effects on real-time decision-making; and many IoT devices lack the computational power to implement strong security measures, which makes them easy targets for these attacks.
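
To make the threat tangible, here is a small Python sketch using scikit-learn: the same classifier is trained on clean labels and on labels where a simulated attacker has flipped 20% of them, and test accuracy drops accordingly. The synthetic dataset, model choice, and poisoning rate are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy illustration of the training-time step above: label-flipping poisoning.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
poisoned_labels = y_tr.copy()
flip = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
poisoned_labels[flip] = 1 - poisoned_labels[flip]        # attacker flips 20% of labels
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_labels)

print("clean accuracy:   ", accuracy_score(y_te, clean.predict(X_te)))
print("poisoned accuracy:", accuracy_score(y_te, poisoned.predict(X_te)))
```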


Preparing for The Future of Work with Digital Humans

For businesses to prepare their staff for the workplace of tomorrow, they need to embrace the technologies of tomorrow—namely, digital humans. These advanced solutions will empower L&D leaders to drive immersive learning experiences for their staff. Digital humans use various technologies and techniques like conversational AI, large language models (LLMs), retrieval-augmented generation, digital human avatars, virtual reality (VR), and generative AI to produce engaging and interactive scenarios that are perfect for training. Recall that a major issue with current training methods is that staff never have opportunities to apply the information they just consumed, resulting in the loss of said information. Digital humans avoid this problem by generating lifelike roleplay scenarios where trainees can actually apply and practice what they have learned, reinforcing knowledge retention. In a sales training example, the digital human takes on the role of a customer, allowing the employee to practice their pitch for a new product or service. The employee can rehearse in realistic conditions rather than studying the details of the new product or service and jumping on a call with a live customer. A detractor might push back and say that digital humans lack a necessary human element.


3 ways test impact analysis optimizes testing in Agile sprints

Code modifications or application changes inherently present risks by potentially introducing new bugs. Not thoroughly validating these changes through testing and review processes can lead to unintended consequences—destabilizing the system and compromising its functionality and reliability. However, validating code changes can be challenging, as it requires developers and testers to either rerun their entire test suites every time changes occur or to manually identify which test cases are impacted by code modifications, which is time-consuming and not optimal in Agile sprints. ... Test impact analysis automates the change analysis process, providing teams with the information they need to focus their testing efforts and resources on validating application changes for each set of code commits versus retesting the entire application each time changes occur. ... In UI and end-to-end verifications, test impact analysis offers significant benefits by addressing the challenge of slow test execution and minimizing the wait time for regression testing after application changes. UI and end-to-end testing are resource-intensive because they simulate comprehensive user interactions across various components, requiring significant computational power and time. 
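
A minimal Python sketch of the selection step at the heart of test impact analysis: keep a map of which source files each test exercises (for example, derived from a previous coverage run) and re-run only the tests whose covered files overlap the change set. The file and test names below are illustrative.

```python
# Illustrative coverage map: test -> set of source files it touches.
coverage_map = {
    "tests/test_checkout.py": {"src/cart.py", "src/pricing.py"},
    "tests/test_login.py": {"src/auth.py"},
    "tests/test_reports.py": {"src/reporting.py", "src/pricing.py"},
}

def impacted_tests(changed_files: set[str]) -> list[str]:
    """Return the tests whose covered files overlap the change set."""
    return sorted(test for test, covered in coverage_map.items()
                  if covered & changed_files)

# A commit touching pricing logic re-runs only the two tests that exercise it.
print(impacted_tests({"src/pricing.py"}))
# ['tests/test_checkout.py', 'tests/test_reports.py']
```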


No one knows what the hell an AI agent is

Well, agents — like AI — are a nebulous thing, and they’re constantly evolving. OpenAI, Google, and Perplexity have just started shipping what they consider to be their first agents — OpenAI’s Operator, Google’s Project Mariner, and Perplexity’s shopping agent — and their capabilities are all over the map. Rich Villars, GVP of worldwide research at IDC, noted that tech companies “have a long history” of not rigidly adhering to technical definitions. “They care more about what they are trying to accomplish” on a technical level, Villars told TechCrunch, “especially in fast-evolving markets.” But marketing is also to blame in large part, according to Andrew Ng, the founder of AI learning platform DeepLearning.ai. “The concepts of AI ‘agents’ and ‘agentic’ workflows used to have a technical meaning,” Ng said in a recent interview, “but about a year ago, marketers and a few big companies got a hold of them.” The lack of a unified definition for agents is both an opportunity and a challenge, Jim Rowan, head of AI for Deloitte, says. On the one hand, the ambiguity allows for flexibility, letting companies customize agents to their needs. On the other, it may — and arguably already has — lead to “misaligned expectations” and difficulties in measuring the value and ROI from agentic projects. “Without a standardized definition, at least within an organization, it becomes challenging to benchmark performance and ensure consistent outcomes,” Rowan said. 


Researchers astonished by tool’s apparent success at revealing AI’s hidden motives

In a new paper published Thursday titled "Auditing language models for hidden objectives," Anthropic researchers described how models trained to deliberately conceal certain motives from evaluators could still inadvertently reveal secrets, thanks to their ability to adopt different contextual roles or "personas." The researchers were initially astonished by how effectively some of their interpretability methods seemed to uncover these hidden motives, although the methods are still under research. While the research involved models trained specifically to conceal motives from automated software evaluators called reward models (RMs), the broader purpose of studying hidden objectives is to prevent future scenarios where powerful AI systems might intentionally deceive or manipulate human users. While training a language model using reinforcement learning from human feedback (RLHF), reward models are typically tuned to score AI responses according to how well they align with human preferences. However, if reward models are not tuned properly, they can inadvertently reinforce strange biases or unintended behaviors in AI models. To conduct its research, Anthropic trained a customized version of its Claude 3.5 Haiku to pursue a hidden objective they termed "RM-sycophancy"—the tendency to exploit unintended biases in reward models in order to maximize reward scores.


Strategies for Success in the Age of Intelligent Automation

Firstly, the integration of AI into existing organizational frameworks calls for a largely collaborative environment. It is imperative for employees to perceive AI not as a usurper of employment, but instead as an ally in achieving collective organizational goals. Cultivating a culture of collaboration between AI systems and human workers is essential to the successful deployment of intelligent automation. Organizations should focus on fostering open communication channels, ensuring that employees understand how AI can enhance their roles and contribute to the organization’s success. To achieve this, leadership must actively engage with employees, addressing concerns and highlighting the benefits of AI integration. ... The ethical ramifications of AI workforce deployment demand meticulous scrutiny. Transparency, accountability, and fairness are integral, and their importance can’t be overstated. It’s vital that AI-driven decisions are aligned with ethical standards. Organizations are responsible for establishing robust ethical frameworks that govern AI interactions, mitigating potential biases and ensuring equitable outcomes. The best way to do this is to implement standards for monitoring AI systems, ensuring they operate within defined ethical boundaries.


AI & Innovation: The Good, the Useless – and the Ugly

First things first: there is good innovation, the kind that genuinely benefits society. AI that enhances energy efficiency in manufacturing, aids scientific discoveries, improves extreme weather prediction, and optimizes resource use in companies falls into this category. Governments can foster those innovations through targeted R&D support, incentives for firms to develop and deploy AI, “buy European tech” procurement policies, and investments in robust digital infrastructure. The Competitiveness Compass outlines similar strategies. That said, given how many different technologies are lumped together in the AI category—everything from facial recognition technology to smart ad tech, ChatGPT, and advanced robotics—it makes little sense to talk about good innovation and “AI and productivity” in the abstract. Most hype these days is about generative AI systems that mimic human creative abilities with striking aptitude. Yet, how transformative will an improved ChatGPT be for businesses? It might streamline some organizational processes, expedite data processing, and automate routine content generation. For some industries, like insurance companies, such capabilities may be revolutionary. For many others, its innovation footprint will be much more modest. 


Revolution at the Edge: How Edge Computing is Powering Faster Data Processing

Due to its unparalleled advantages, edge computing is rapidly becoming the primary supporting technology of industries where speed, reliability, or efficiency aren’t just useful but imperative. Edge computing relies on IoT as its most crucial component since there are billions of connected devices producing an immense and constant amount of data that needs to be processed right away. IoT devices in the residential sector, such as smart sensors in homes or Nest smart thermostats, as well as peripherals used for industrial automation in factories, all use edge computing. ... The way edge computing will function in the future is very exciting. With 5G, AI, and IoT, edge technologies are likely to become smarter, more widespread, and faster. Imagine a world where factories optimize themselves, smart traffic systems talk to autonomous vehicles, and healthcare devices stop illnesses from happening before they start.


Harnessing the data storm: three top trends shaping unstructured data storage and AI

The sheer volume of unstructured information generated by enterprises necessitates a new approach to storage. Object storage offers a better, more cost-effective method for handling significant datasets compared to traditional file-based systems. Unlike traditional storage methods, object storage treats each data item as a distinct object with its metadata. This approach offers both scalability and flexibility; ideal for managing the vast quantities of images, videos, sensor data, and other unstructured content generated by modern enterprises. ... Data lakes, the centralized repositories for both structured and unstructured data, are becoming increasingly sophisticated with the integration of AI and machine learning. These enable organizations to delve deeper into their data, uncovering hidden patterns and generating actionable insights without requiring complex and costly data preparation processes. ... The explosion of unstructured data presents both immense opportunities and challenges for organizations in every market across the globe. To thrive in this data-driven era, businesses must embrace innovative approaches to data storage, management, and analysis that are both cost-effective and compliant with evolving regulations. 
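
A hedged sketch of that object-plus-metadata model in Python, using boto3 against an S3-compatible endpoint. The endpoint, bucket, key, and metadata fields are assumptions for illustration, and credentials are resolved from the environment as boto3 normally does.

```python
import boto3

# Hypothetical S3-compatible object store; endpoint and bucket are assumptions.
s3 = boto3.client("s3", endpoint_url="https://objects.example.internal")

with open("turbine_007_vibration.wav", "rb") as blob:
    s3.put_object(
        Bucket="sensor-archive",
        Key="plant-3/turbine-007/2025-03-15.wav",
        Body=blob,
        # User-defined metadata travels with the object and can be indexed later.
        Metadata={
            "device-id": "turbine-007",
            "site": "plant-3",
            "captured-at": "2025-03-15T04:12:00Z",
        },
    )
```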


Open Source Tools Seen as Vital for AI in Hybrid Cloud Environments

The landscape of enterprise open source solutions is evolving rapidly, driven by the need for flexibility, scalability, and innovation. Enterprises are increasingly relying on open source technologies to drive digital transformation, accelerate software development, and foster collaboration across ecosystems. With advancements in cloud computing, AI, and containerization, open source solutions are shaping the future of IT by providing adaptable and secure platforms that meet evolving business needs. The active and diverse community support ensures continuous improvement, making open source a cornerstone of modern enterprise technology strategies. Red Hat's portfolio, including Red Hat Enterprise Linux, Red Hat OpenShift, Red Hat AI and Red Hat Ansible Automation Platform, provides robust platforms that support diverse workloads across hybrid and multi-cloud environments. Additionally, Red Hat's extensive partner ecosystem provides more seamless integration and support for a wide range of technologies and applications. Our commitment to open source principles and continuous innovation allows us to deliver solutions that are secure, scalable, and tailored to the needs of our customers. Open source has proven to be trusted and secure at the forefront of innovation.


Daily Tech Digest - March 14, 2025


Quote for the day:

“Success does not consist in never making mistakes but in never making the same one a second time.” --George Bernard Shaw


The Maturing State of Infrastructure as Code in 2025

The progression from cloud-specific frameworks to declarative, multicloud solutions like Terraform represented the increasing sophistication of IaC capabilities. This shift enabled organizations to manage complex environments with never-before-seen efficiency. The emergence of programming language-based IaC tools like Pulumi then further blurred the lines between application development and infrastructure management, empowering developers to take a more active role in ops. ... For DevOps and platform engineering leaders, this evolution means preparing for a future where cloud infrastructure management becomes increasingly automated, intelligent and integrated with other aspects of the software development life cycle. It also highlights the importance of fostering a culture of continuous learning and adaptation, as the IaC landscape continues to evolve at a rapid pace. ... Firefly’s “State of Infrastructure as Code (IaC)” report is an annual pulse check on the rapidly evolving state of IaC adoption, maturity and impact. Over the course of the past few editions, this report has become an increasingly crucial resource for DevOps professionals, platform engineers and site reliability engineers (SREs) navigating the complexities of multicloud environments and a changing IaC tooling landscape.
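
To illustrate what "programming language-based IaC" looks like in practice, here is a tiny Pulumi-style sketch in Python. The resource name, tags, and provider are assumptions, and running it requires a configured Pulumi project and cloud credentials, so treat it as illustrative only.

```python
import pulumi
import pulumi_aws as aws

# Infrastructure declared as ordinary application code: a bucket resource
# whose name and tags are hypothetical placeholders.
bucket = aws.s3.Bucket("build-artifacts", tags={"team": "platform"})

# Outputs are exported so other stacks or pipelines can consume them.
pulumi.export("bucket_name", bucket.id)
```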


Consent Managers under the Digital Personal Data Protection Act: A Game Changer or Compliance Burden?

The use of Consent Managers provides advantages for both Data Fiduciaries and Data Principals. For Data Fiduciaries, Consent Managers simplify compliance with consent-related legal requirements, making it easier to manage and document user consent in line with regulatory obligations. For Data Principals, Consent Managers offer a streamlined and efficient way to grant, modify, and revoke consent, empowering them with greater control over how their personal data is shared. This enhanced efficiency in managing consent also leads to faster, more secure, and smoother data flows, reducing the complexities and risks associated with data exchanges. Additionally, Consent Managers play a crucial role in helping Data Principals exercise their right to grievance redressal. ... Currently, Data Fiduciaries can manage user consent independently, making the role of Consent Managers optional. If this remains voluntary, many companies may avoid them, reducing their effectiveness. For Consent Managers to succeed, they need regulatory support, flexible compliance measures, and a business model that balances privacy protection with industry participation. ... Rooted in the fundamental right to privacy under Article 21 of the Constitution of India, the DPDPA aims to establish a structured approach to data processing while preserving individual control over personal information.


The future of AI isn’t the model—it’s the system

Enterprise leaders are thinking differently about AI in 2025. Several founders here told me that unlike in 2023 and 2024, buyers are now focused squarely on ROI. They want systems that move beyond pilot projects and start delivering real efficiencies. Mensch says enterprises have developed “high expectations” for AI, and many now understand that the hard part of deploying it isn’t always the model itself—it’s everything around it: governance, observability, security. Mistral, he says, has gotten good at connecting these layers, along with systems that orchestrate data flows between different models and subsystems. Once enterprises grapple with the complexity of building full AI systems—not just using AI models—they start to see those promised efficiencies, Mensch says. But more importantly, C-suite leaders are beginning to recognize the transformative potential. Done right, AI systems can radically change how information moves through a company. “You’re making information sharing easier,” he says. Mistral encourages its customers to break down silos so data can flow across departments. One connected AI system might interface with HR, R&D, CRM, and financial tools. “The AI can quickly query other departments for information,” Mensch explains. “You no longer need to query the team.”


Generative AI is finally finding its sweet spot, says Databricks chief AI scientist

Beyond the techniques, knowing what apps to build is itself a journey and something of a fishing expedition. "I think the hardest part in AI is having confidence that this will work," said Frankle. "If you came to me and said, 'Here's a problem in the healthcare space, here are the documents I have, do you think AI can do this?' my answer would be, 'Let's find out.'" ... "Suppose that AI could automate some of the most boring legal tasks that exist?" offered Frankle, whose parents are lawyers. "If you wanted an AI to help you do legal research, and help you ideate about how to solve a problem, or help you find relevant materials -- phenomenal!" "We're still in very early days" of generative AI, "and so, kind of, we're benefiting from the strengths, but we're still learning how to mitigate the weaknesses." ... In the midst of uncertainty, Frankle is impressed with how customers have quickly traversed the learning curve. "Two or three years ago, there was a lot of explaining to customers what generative AI was," he noted. "Now, when I talk to customers, they're using vector databases." "These folks have a great intuition for where these things are succeeding and where they aren't," he said of Databricks customers. Given that no company has an unlimited budget, Frankle advised starting with an initial prototype, so that investment only proceeds to the extent that it's clear an AI app will provide value.


Australia’s privacy watchdog publishes regulatory strategy prioritizing biometrics

The strategy plan includes a table of activities and estimated timelines, a detailed breakdown of actions in specific categories, and a list of projected long- and short-term outcomes. The goals are ambitious in scope: a desired short-term outcome is to “mature existing awareness about privacy across multiple domains of life” so that “individuals will develop a more nuanced understanding of privacy issues recognising their significance across various aspects of their lives, including personal, professional, and social domains.” Laws, skills training and better security tools are one thing, but changing how people understand their privacy is a major social undertaking. The OAIC’s long-term outcomes seem more rooted in practicality; they include the widespread implementation of enhanced privacy compliance practices for organizations, better public understanding of the OAIC’s role as regulator, and enhanced data handling industry standards. ... AI is a matter of ongoing concern, and compliance for model training and development will be a major focus for the regulator. In late February, Kind delivered a speech on privacy and security in retail that references her decision in the Bunnings case, which led to the publication of guidance on the use of facial recognition technology, focused on four key privacy concepts: necessity/proportionality, consent/transparency, accuracy/bias, and governance.


Hiring privacy experts is tough — here’s why

“Some organizations think, ‘Well, we’re funding security, and privacy is basically the same thing, right?’ And I think that’s really one of my big concerns,” she says. This blending of responsibilities is reflected in training practices, according to Kazi, who notes how many organizations combine security and privacy training, which isn’t inherently problematic, but it carries risks. “One of the questions we ask in our survey is, ‘Do you combine security training and privacy training?’ Some organizations say they do not necessarily see it as a bad thing, but you can … be doing security, but you’re not doing privacy. And so that’s what’s highly concerning is that you can’t have privacy without security, but you could potentially do security well without considering privacy.” As Trovato emphasizes, “cybersecurity people tend to be from Mars and privacy people from Venus”, yet he also observes how privacy and cybersecurity professionals are often grouped together, adding to the confusion about what skills are truly needed. ... “Privacy includes how are we using data, how are you collecting it, who are you sharing it with, how are you storing it — all of these are more subtle component pieces, and are you meeting the requirements of the customer, of the regulator, so it’s a much more outward business focus activity day-to-day versus we’ve got to secure everything and make sure it’s all protected.”


Security Maturity Models: Leveraging Executive Risk Appetite for Your Secure Development Evolution

With developers under pressure to produce more code than ever before, development teams need to have a high level of security maturity to avoid rework. That necessitates having highly skilled personnel working within a strategic, prevention-focused framework. Developer and AppSec teams must work closely together, as opposed to the old model of operating as separate entities. Today, developers need to assume a significant role in ensuring security best practices. The most recent BSIMM report from Black Duck Software, for instance, found that there are only 3.87 AppSec professionals for every 100 developers, which doesn’t bode well for AppSec teams trying to secure an organization’s software all on their own. A critical part of learning initiatives is the ability to gauge the progress of developers in the program, both to ensure that developers are qualified to work on the organization’s most sensitive projects and to assess the effectiveness of the program. This upskilling should be ongoing, and you should always look for areas that can be improved. Making use of a tool like SCW’s Trust Score, which uses benchmarks to gauge progress both internally and against industry standards, can help ensure that progress is being made.


Why thinking like a tech company is essential for your business’s survival

The phrase “every company is a tech company” gets thrown around a lot, but what does that actually mean? To us, it’s not just about using technology — it’s about thinking like a tech company. The most successful tech companies don’t just refine what they already do; they reinvent themselves in anticipation of what’s next. They place bets. They ask: Where do we need to be in five or 10 years? And then, they start moving in that direction while staying flexible enough to adapt as the market evolves. ... Risk management is part of our DNA, but AI presents new types of risks that businesses haven’t dealt with before. ... No matter how good our technology is, our success ultimately comes down to people. And we’ve learned that mindset matters more than skill set. When we launched an AI proof-of-concept project for our interns, we didn’t recruit based on technical acumen. Instead, we looked for curious, self-starting individuals willing to experiment and learn. What we found was eye-opening—these interns thrived despite having little prior experience with AI. Why? Because they asked great questions, adapted quickly, and weren’t afraid to explore. ... Aligning your culture, processes and technology strategy ensures you can adapt to a rapidly changing landscape while staying true to your core purpose.


Realizing the Internet of Everything

The obvious answer to this problem is governance: a set of rules that constrain use, and technology to enforce them. The problem, as it so often is with the “obvious,” is that setting the rules would be difficult, and constraining use through technology would be both hard to do and probably harder to get people to believe in. Think about Asimov’s Three Laws of Robotics and how many of his stories focused on how people worked to get around them. Two decades ago, a research lab ran a video collaboration experiment that put a small camera in offices so people could communicate remotely. Half the workforce covered their camera when they got in. I know people who routinely cover their webcams when they’re not on a scheduled video chat or meeting, and you probably do too. So what if the light isn’t on? Somebody has probably hacked in. Social concerns inevitably collide with attempts to integrate technology tightly with how we live. Have we reached a point where dealing with those concerns convincingly is essential to letting technology further improve our work and our lives? We do have widespread, if not universal, video surveillance. On a walk this week, I found doorbell cameras or other cameras on about a quarter of the homes I passed, and I’d bet there are even more in commercial areas.


Cloud Security Architecture: Your Guide to a Secure Infrastructure

Threat modeling can be a good starting point, but it shouldn't end with a stack-based security approach. Rather than focusing solely on the technologies, approach security by mapping parts of your infrastructure to equivalent security concepts. Here are some practical suggestions and areas to zoom in on for implementation. ... When protecting workloads in the cloud, consider using some variant of runtime security. Kubernetes users have no shortage of choice here with tools such as Falco, an open-source runtime security tool that monitors your applications and detects anomalous behaviors. Even outside Kubernetes, chances are your cloud provider offers some form of dynamic threat detection for your workloads. For example, AWS offers Amazon GuardDuty, which continuously monitors your workloads for malicious activity and unauthorized behavior. ... Implementing two-factor authentication adds an extra layer of protection by requiring a second form of verification, such as an authenticator app or a passkey, in addition to your password. Reaching for your authenticator app every time you log in may seem slightly inconvenient, but that minor friction is a small price to pay compared with dealing with the aftermath of a breached account.
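To illustrate the second-factor check described above, here is a minimal sketch of a time-based one-time password (TOTP) flow using the pyotp library; it assumes the shared secret is provisioned to the user's authenticator app at enrollment, and the user and issuer names are hypothetical placeholders rather than a recommendation for any particular provider.

```python
# Minimal sketch of a time-based one-time password (TOTP) check, the mechanism
# behind most authenticator-app second factors. Assumes the pyotp library; in a
# real system the secret would be generated at enrollment, stored securely per
# user, and shared with the authenticator app via a QR code.
import pyotp

# Enrollment: generate and persist a per-user secret (hypothetical user/issuer).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleApp"))

# Login: after the password check succeeds, require the current 6-digit code.
code = input("Enter the code from your authenticator app: ")
if totp.verify(code, valid_window=1):   # allow one 30-second step of clock drift
    print("Second factor accepted.")
else:
    print("Invalid or expired code.")
```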