
Daily Tech Digest - March 08, 2026


Quote for the day:

"How was your day? If your answer was "fine," then I don't think you were leading" -- Seth Godin



Technical debt is the tax killing AI ambition

In this article, Rebecca Fox argues that while artificial intelligence offers game-changing productivity, most organizations remain fundamentally ill-prepared for its full-scale adoption due to legacy technical and data debt. She compares technical debt to financial debt, where deferred maintenance acts as high-interest payments that stifle agility and increase operational costs. The article emphasizes that AI functions as a high-speed spotlight, amplifying "garbage in, garbage out" scenarios; without robust data governance and simplified information architecture, AI initiatives inevitably plateau or produce confidently incorrect results. Furthermore, the tension between AI ambition and economic reality is heightened by CFOs who are increasingly wary of large-scale investments with uncertain returns. Fox contends that instead of seeking a "magic wand" solution, leaders must use the current excitement surrounding AI as a catalyst to finally address unglamorous foundational work. This involves simplifying core platforms, reducing integration sprawl, and prioritizing data quality across the business. Ultimately, AI cannot fix technical debt on its own, but it serves as a critical reason to resolve it, ensuring that organizations can scale effectively without being crushed by the compounding costs of their own legacy systems and fragmented data estates.


Why Executive Presence Is A Hard Asset (Not A Soft Skill)

The article argues that executive presence is a tangible, measurable business driver rather than an abstract personality trait. By linking trust directly to revenue performance and organizational stability, the author highlights how leaders serve as the primary conduits for corporate credibility. In an era increasingly dominated by AI-driven skepticism and the complexities of hybrid work, authentic presence provides essential reassurance to stakeholders. The piece emphasizes that executive presence functions as a shorthand for judgment, influencing how investors, employees, and customers evaluate a leader's ability to deliver results. It identifies specific components of this asset, including vocal delivery, media training, and disciplined messaging, noting that perception is heavily influenced by nonverbal cues like tone and pitch. Furthermore, the article suggests that a comprehensive public relations strategy is necessary to sustain this presence over time. Ultimately, investing in executive presence is presented as a strategic move that creates durable value, strengthens leadership effectiveness, and offers a steadying force during periods of uncertainty. Rather than being a "soft" addition, it is a critical hard asset that determines long-term success and reputational resilience in a competitive landscape.


NIST Urged to Go Deep in OT Security Guidance

The National Institute of Standards and Technology (NIST) is currently updating its foundational operational technology (OT) security guidance, Special Publication 800-82, for its fourth iteration. In response to NIST’s call for input, cybersecurity experts and major vendors like Claroty, Armis, and Dragos are advocating for more granular, actionable advice that reflects the maturing nature of the field. These specialists emphasize that traditional IT security practices are often inadequate or even hazardous when applied to sensitive industrial environments. Key recommendations include moving beyond binary "scan or don’t scan" dilemmas by establishing passive assessment baselines and adopting risk-based frameworks for controlled active scanning. Furthermore, there is a strong push for NIST to harmonize its guidelines with global technical standards, such as ISA/IEC 62443, to reduce regulatory burdens on operators. Experts also suggest shifting static appendices into dynamic, machine-readable web resources to better address evolving threats. By focusing on asset criticality and multidimensional vulnerability scoring rather than just static CVSS data, the updated guidance could provide the technical depth necessary for modern industrial automation. Ultimately, the goal is to provide clear, specific instructions that leave less room for ambiguity in securing critical infrastructure.


Signals Show Heightened Stress on Workplace Cultures

The NAVEX 2025 Whistleblowing and Incident Management Benchmark Report, as detailed on JD Supra, highlights a significant rise in workplace culture stressors, particularly regarding workplace civility. This category, which includes disrespectful behaviors that do not necessarily meet legal definitions of harassment, now accounts for nearly 18% of global reports. The data reveals a notable regional divergence; while North America saw a slight decrease, reports increased across Europe, APAC, and South America, signaling maturing reporting cultures that now treat "soft" cultural issues as formal compliance matters. Furthermore, workplace conduct issues dominate over half of all global reports, serving as a critical early warning system for broader ethical failures. The report also notes a concerning uptick in retaliation fears and imminent threat reports, the latter of which boasts a 90% substantiation rate. These trends suggest that unresolved interpersonal tensions can escalate into serious safety risks and compliance breaches. To mitigate these risks in 2026, organizations are urged to elevate workplace civility to a strategic priority, strengthen anti-retaliation protections, and improve investigation transparency. Ultimately, the findings underscore that psychological safety is foundational to effective whistleblowing systems and overall organizational resilience in an increasingly volatile global landscape.


Backup strategies are working, and ransomware gangs are responding with data theft

According to the 2026 Cyber Claims Report from Coalition, business email compromise (BEC) and funds transfer fraud (FTF) dominated the cyber insurance landscape in 2025, accounting for 58% of all claims. While BEC frequency rose by 15%, faster detection helped reduce the average loss per incident. Conversely, ransomware frequency remained flat, but initial demands surged by 47% to exceed $1 million on average. This shift highlights a strategic change among attackers: as organizations improve their backup strategies, ransomware gangs are increasingly pivoting toward dual extortion, which involves both data encryption and theft. In fact, 70% of ransomware claims now involve this dual-threat tactic. The report identifies Akira as the most frequent ransomware variant, while RansomHub carried the highest average demand at over $2.3 million. Despite these aggressive tactics, 86% of victims refused to pay, and those who did often utilized professional negotiators to reduce costs by an average of 65%. Technically, VPNs emerged as the most targeted technology, appearing in 59% of ransomware incidents. Security experts emphasize that organizations must prioritize data minimization and hardened, immutable backups to combat these evolving threats effectively while securing public-facing login panels and critical infrastructure. These findings highlight the urgent need for robust defenses.


Only 30 minutes per quarter on cyber risk: Why CISO-board conversations are falling short

The article "Only 30 minutes per quarter on cyber risk: Why CISO-board conversations are falling short" explores a widening communication gap between Chief Information Security Officers (CISOs) and corporate boards. Despite the escalating threat of AI-driven cyberattacks, research from IANS and Artico Search indicates that three-quarters of security leaders are limited to just 30 minutes per quarter for board presentations. These interactions are frequently superficial, prioritizing status metrics over strategic risk discussions or emerging threats. Consequently, only 30% of boards describe their relationship with CISOs as strong and collaborative, while many others perceive these interactions as merely functional. The report further notes that boards often remain passive, with fewer than half participating in active exercises like tabletop simulations or crisis drills. To address this divide, the article suggests that CISOs must transition from technical specialists into business-minded leaders who can effectively contextualize cybersecurity within the broader landscape of organizational risk and ROI. By cultivating deeper engagement and offering predictive insights—particularly regarding disruptive technologies like AI—CISOs can evolve these brief updates into substantive strategic partnerships that enhance long-term organizational resilience in an increasingly volatile and complex global digital threat environment.


Ask the Experts: CIOs say they wouldn’t pull workloads back from the cloud

The InformationWeek article, "Ask the Experts: CIOs Say They Wouldn’t Pull Workloads Back from the Cloud," explores the phenomenon of cloud repatriation versus the steadfast commitment of leading IT executives to cloud environments. While data from Flexera suggests that roughly 21% of organizations are returning some workloads to on-premises infrastructure due to costs and security concerns, experts Josh Hamit and Sue Bergamo argue that the cloud remains the ultimate destination for modern innovation. Hamit, CIO of Altra Federal Credit Union, attributes his success to a deliberate, gradual migration strategy and the use of experienced partners, noting that the cloud provides unmatched scalability and essential tie-ins for artificial intelligence. Similarly, Bergamo, a veteran CIO and CISO, contends that with proper architectural configuration, the cloud offers security and performance levels that rival or exceed traditional data centers. She emphasizes that perceived drawbacks like latency and overage charges are typically results of poor planning rather than inherent flaws in the cloud model itself. Both leaders conclude that the agility, global reach, and innovative potential of cloud computing make it an indispensable asset, asserting they would not reverse their digital transformations if given the chance to start over today.


The cybersecurity blind spot in data center building systems

This article argues that the rapid expansion of data centers, fueled by the global AI revolution, has introduced a critical vulnerability in Operational Technology (OT). While digital security often focuses on data protection, the physical systems controlling power, cooling, and access are increasingly susceptible to remote exploitation. Modern facilities are marvels of automation, frequently managed via remote networks with minimal on-site staff, which inadvertently creates prime targets for sophisticated adversaries. Drawing parallels to historical breaches like the Stuxnet attack and the Ukrainian power grid incident, the piece warns that similar tactics could be used to manipulate environmental controls, causing power surges or overheating that could permanently damage sensitive GPUs. Furthermore, the integration of AI into facility management creates new entry points; if corrupted, the same algorithms intended to optimize performance could be weaponized to sabotage operations. The author contends that existing safeguards, such as periodic stress tests, are insufficient in this evolving threat landscape. Ultimately, investors and operators are urged to prioritize OT security through rigorous due diligence and proactive questioning to ensure that these essential infrastructure components do not remain a dangerous oversight in the rush to build.


Technical Debt Is Eating Your Firmware Alive: 3 Steps to Fight Back

In the article "Technical Debt Is Eating Your Firmware Alive: 3 Steps to Fight Back," Jacob Beningo explains how firmware technical debt accumulates when deadline pressures force developers to take shortcuts, resulting in tangled architectures and global variable "glue." Beningo identifies this as a leadership challenge, noting that organizations often prioritize immediate feature delivery over long-term code health. The symptoms of high debt include plummeting feature velocity, extended bug-fix times, and constant firefighting, leading to maintenance costs that are two to four times higher than clean codebases. To reverse this trend, Beningo outlines three practical steps for teams to implement immediately. First, make debt visible by measuring objective metrics like coupling and cyclomatic complexity. Second, institute lightweight, fifteen-minute code reviews focused on maintaining module boundaries rather than just finding bugs. Third, reclaim one specific architectural boundary at a time to prevent total paralysis. By enforcing even a single interface, teams can begin restoring order to their repository. Ultimately, Beningo argues that firmware must be treated as a valuable asset rather than a liability. Proactive management of technical debt ensures that long-lived embedded products remain maintainable and profitable without necessitating costly, high-risk rewrites later on.


Misconfigured Microsoft 365 leaves big firms exposed

According to recent research from CoreView, nearly half of large organizations experienced security or compliance incidents over the past year due to Microsoft 365 misconfigurations. The study, which surveyed 500 IT leaders and analyzed data from 1.6 million users, highlights that 82% of professionals consider managing the platform a severe operational burden, with many finding it nearly impossible to secure at scale. Significant visibility gaps persist, as 45% of organizations lack full control over their environments, while 90% struggle with basic security hygiene like enforcing password policies. Critical vulnerabilities are also evident in authentication practices; remarkably, 87% of organizations have administrators operating without multi-factor authentication. Furthermore, governance issues have led to failed or delayed audits for 43% of firms because of manual reporting processes. While 70% of IT leaders recognize the potential value of AI-driven administration, over half have already reversed AI-implemented changes due to governance fears. CoreView warns that deploying AI into these misconfigured environments without established guardrails only accelerates risk rather than solving underlying structural problems. Consequently, firms must prioritize strengthening their governance foundations and basic security controls before expanding automation across their increasingly complex Microsoft 365 ecosystems to prevent cascading data exposure.

Daily Tech Digest - January 14, 2026


Quote for the day:

"To accomplish great things, we must not only act, but also dream, not only plan, but also believe." -- Anatole France



Outsmarting Data Center Outage Risks in 2026

Even the most advanced and well-managed facilities are not immune to disruptions. Recent incidents, such as outages at AWS, Cloudflare, and Microsoft Azure, serve as reminders that no data center can guarantee 100% uptime. This highlights the critical importance of taking proactive steps to mitigate data center outage risks, regardless of how reliable your facility appears to be. ... Overheating events can cause servers to shut down, leading to outages. To prevent an outage, you must detect and address excess heat issues proactively, before they become severe enough to trigger failures. A key consideration in this regard is to monitor data center temperatures granularly – meaning that instead of just deploying sensors that track the overall temperature of the server room, you monitor the temperatures of individual racks and servers. This is important because heat can accumulate in small areas, even if it remains normal across the data center. ... But from the perspective of data center uptime, physical security, which protects against physical attacks, is arguably a more important consideration. Whereas cybersecurity attacks typically target only a handful of servers or workloads, physical attacks can easily disable an entire data center. To this end, it’s critical to invest in multi-layered physical security controls – from the data center perimeter through to locks on individual server cabinets – to protect against intrusion. ... To mitigate outage risks, data center operators must take proactive steps to prevent fires from starting in the first place. 
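
As a rough illustration of that granular monitoring idea (the rack names, readings, and the 35 °C threshold below are assumptions for the sketch, not details from the article), a per-rack hotspot check might look something like this:

```python
# Minimal sketch of per-rack temperature monitoring (illustrative only).
# Rack IDs, readings, and the alert threshold are hypothetical assumptions.
from dataclasses import dataclass

RACK_ALERT_THRESHOLD_C = 35.0  # assumed per-rack alert threshold


@dataclass
class RackReading:
    rack_id: str
    celsius: float


def check_hotspots(readings: list[RackReading]) -> list[str]:
    """Return rack IDs whose local temperature exceeds the threshold,
    even when the room-level average still looks normal."""
    room_avg = sum(r.celsius for r in readings) / len(readings)
    hot = [r.rack_id for r in readings if r.celsius > RACK_ALERT_THRESHOLD_C]
    print(f"room average: {room_avg:.1f} C, hot racks: {hot}")
    return hot


if __name__ == "__main__":
    sample = [RackReading("rack-01", 24.5), RackReading("rack-02", 37.2),
              RackReading("rack-03", 25.1)]
    check_hotspots(sample)  # rack-02 is a hotspot despite a normal room average
```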


Deploying AI agents is not your typical software launch - 7 lessons from the trenches

Across the industry, there is agreement that agents require new considerations beyond what we've become accustomed to in traditional software development. In the process, new lessons are being learned. Industry leaders shared some of their own lessons with ZDNET as they moved forward into an agentic AI future. ... Kale urges AI agent proponents to "grant autonomy in proportion to reversibility, not model confidence. Irreversible actions across multiple domains should always have human oversight, regardless of how confident the system appears." Observability is also key, said Kale. "Being able to see how a decision was reached matters as much as the decision itself." ... "AI works well when it has quality data underneath," said Oleg Danyliuk, CEO at Duanex, a marketing agency that built an agent to automate the validation of leads of visitors to its site. "In our example, in order to understand if the lead is interesting for us, we need to get as much data as we can, and the most complex is to get the social network's data, as it is mostly not accessible to scrape. That's why we had to implement several workarounds and get only the public part of the data." ... "AI agents do not succeed on model capability alone," said Martin Bufi, a principal research director at Info-Tech Research Group. His team designed and developed AI agent systems for enterprise-level functions, including financial analysis, compliance validation, and document processing. What helped these projects succeed was the employment of "AgentOps" (agent operations), which focuses on managing the entire agent lifecycle.


What enterprises think about quantum computing

Quantum computers’ qubits are incredibly fragile, so even setting or reading qubits has to be incredibly precise or it messes everything up. Environmental conditions can also mess things up, because qubits can get entangled with the things around them. Qubits can even leak away in the middle of something. So, here we have a technology that most people don’t understand and that is incredibly finicky, and we’re supposed to bet the business on it? How many enterprises would? None, according to the 352 who commented on the topic. How many think their companies will use it eventually? All of them—but they don’t know where or when, as an old song goes. And by the way, quantum theory is older than that song, and we still don’t have a handle on it. ... First and foremost, this isn’t the technology for general business applications. The quantum geeks emphasize that good quantum applications are where you have some incredibly complex algorithm, some math problem, that is simply not solvable using digital computers. Some suggest that it’s best to think of a quantum computer as a kind of analog computer. ... Even where quantum computing can augment digital, you’ll have to watch ROI according to the second point. The cost of quantum computing is currently prohibitive for most applications, even the stuff it’s good for, so you need to find applications that have massive benefits, or think of some “quantum as a service” for solving an occasional complex problem.


Beyond the hype: 4 critical misconceptions derailing enterprise AI adoption

Leaders frequently assume AI adoption is purely technological when it represents a fundamental transformation that requires comprehensive change management, governance redesign and cultural evolution. The readiness illusion obscures human and organizational barriers that determine success. ... Leaders frequently assume AI can address every business challenge and guarantee immediate ROI, when empirical evidence demonstrates that AI delivers measurable value only in targeted, well-defined and precise use cases. This expectation-reality gap contributes to pilot paralysis, in which companies undertake numerous AI experiments but struggle to scale any to production. ... Executives frequently claim their enterprise data is already clean or assume that collecting more data will ensure AI success — fundamentally misunderstanding that quality, stewardship and relevance matter exponentially more than raw quantity — and misunderstanding that the definition of clean data changes when AI is introduced. ... AI systems are probabilistic and require continuous lifecycle management. MIT research demonstrates that manufacturing firms adopting AI frequently experience J-curve trajectories, where initial productivity declines but is then followed by longer-term gains. This is because AI deployment triggers organizational disruption requiring adjustment periods. Companies failing to anticipate this pattern abandon initiatives prematurely. The fallacy manifests as inadequate deployment management, including a failure to plan for model monitoring, retraining, governance and adaptation.


Inside the Growing Problem of Identity Sprawl

For years, identity governance relied on a set of assumptions tied closely to human behavior. Employees joined organizations, moved roles and eventually left. Even when access reviews lagged or controls were imperfect, identities persisted long enough to be corrected. That model no longer reflects reality. The difference between human and machine identities isn't just scale. "With human identities, if people are coming into your organizations as employees, you onboard them. They work, and by the time they leave, you can deprovision them," said Haider Iqbal ... "Organizations are using AI today, whether they know it or not, and most organizations don't even know that it's deployed in their environment," said Morey Haber, chief security advisor at BeyondTrust. That lack of awareness is not limited to AI. Many security teams struggle to maintain a reliable inventory of non-human identities, especially when those identities are created dynamically by automation or cloud services. Visibility gaps don't stop access from being granted, but they do prevent teams from confidently enforcing policy. "Without integration … I don't know what it's doing, and then I got to go figure it out. When you unify together, then you have all the AI visibility," Haber said, describing the operational impact of fragmented tooling. ... Modern enterprise environments rely on elevated access for cloud orchestration, application integration and automated workflows. Service accounts and application programming interfaces often require broad permissions to function reliably.


The Timeless Architecture: Enterprise Integration Patterns That Exceed Technology Trends

A strange reality is often encountered by enterprise technology leaders: everything seems to change, yet many things remain the same. New technologies emerge — from COBOL to Java to Python, from mainframes to the cloud — but the fundamental problems persist. Organizations still need to connect incompatible systems, convert data between different formats, maintain reliability when components fail, and scale to meet increasing demand. ... Synchronous request-response communication creates tight coupling and can lead to cascading failures. Asynchronous messaging has appeared across all eras — on mainframes via MQ, in SOA through ESB platforms, in cloud environments via managed messaging services such as SQS and Service Bus, and in modern event-streaming platforms like Kafka. ... A key architectural question is how to coordinate complex processes that span multiple systems. Two primary approaches exist. Orchestration relies on a centralized coordinator to control the workflow, while choreography allows systems to react to events in a decentralized manner. Both approaches existed during the mainframe era and remain relevant in microservices architectures today. Each has advantages: orchestration provides control and visibility, while choreography offers resilience and loose coupling. ... Organizations that treat security as a mere technical afterthought often accumulate significant technical debt. In contrast, enterprises that embed security patterns as foundational architectural elements are better equipped to adapt as technologies evolve.


From distributed monolith to composable architecture on AWS: A modern approach to scalable software

A distributed monolith is a system composed of multiple services or components, deployed independently but tightly coupled through synchronous dependencies such as direct API calls or shared databases. Unlike a true microservices architecture, where services are autonomous and loosely coupled, distributed monoliths share many pitfalls of monoliths ... Composable architecture embraces modularity and loose coupling by treating every component as an independent building block. The focus lies in business alignment and agility rather than just code decomposition. ... Start by analyzing the existing application to find natural business or functional boundaries. Use Domain-Driven Design to define bounded contexts that encapsulate specific business capabilities. ... Refactor the code into separate repositories or modules, each representing a bounded context or microservice. This clear separation supports independent deployment pipelines and ownership. ... Replace direct code or database calls with API calls or events. For example: Use REST or GraphQL APIs via API Gateway. Emit business events via EventBridge or SNS for asynchronous processing. Use SQS for message queuing to handle transient workloads. ... Assign each microservice its own DynamoDB table or data store. Avoid cross-service database joins or queries. Adopt a single-table design in DynamoDB to optimize data retrieval patterns within each service boundary. This approach improves scalability and performance at the data layer.
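
To make the event-driven step concrete, here is a minimal boto3 sketch of emitting a business event to EventBridge instead of calling a downstream service directly; the bus name, source, and payload are illustrative assumptions, and AWS credentials and region configuration are assumed to be in place:

```python
# Minimal sketch: replacing a direct cross-service call with an EventBridge event.
# The bus name, source, detail-type, and payload below are illustrative assumptions.
import json
import boto3

events = boto3.client("events")


def publish_order_placed(order_id: str, total: float) -> None:
    """Emit a business event instead of calling the downstream service directly."""
    events.put_events(
        Entries=[{
            "EventBusName": "orders-bus",   # assumed custom event bus
            "Source": "orders.service",     # assumed source identifier
            "DetailType": "OrderPlaced",
            "Detail": json.dumps({"orderId": order_id, "total": total}),
        }]
    )

# A downstream service (for example, invoicing) subscribes to "OrderPlaced" via an
# EventBridge rule and processes it asynchronously, keeping the services loosely coupled.
```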


Firmware scanning time, cost, and where teams run EMBA

Security teams that deal with connected devices often end up running long firmware scans overnight, checking progress in the morning, and trying to explain to colleagues why a single image consumed a workday of compute time. That routine sets the context for a new research paper that examines how the EMBA firmware analysis tool behaves when it runs in different environments. ... Firmware scans often stretch into many hours, especially for medium and large images. The researchers tracked scan durations down to the second and repeated runs to measure consistency. Repeated executions on the same platform produced nearly identical run times and findings. That behavior matters for teams that depend on repeatable results during testing, validation, or research work. It also supports the use of EMBA in environments where scans need to be rerun with the same settings over time. The data also shows that firmware size alone does not explain scan duration. Internal structure, compression, and embedded components influenced how long individual modules ran. Some smaller images triggered lengthy analysis steps, especially during deep inspection stages. ... Nuray said cloud based EMBA deployments fit well into large scale scanning activity. He described cloud execution as a practical option for parallel analysis across many firmware images. Local systems, he added, support detailed investigation where teams need tight control over execution conditions and repeatability. 


'Most Severe AI Vulnerability to Date' Hits ServiceNow

Authentication issues in ServiceNow potentially opened the door for arbitrary attackers to gain full control over the entire platform and access to the various systems connected to it. ... Costello's first major discovery was that ServiceNow shipped the same credential to every third-party service that authenticated to the Virtual Agent application programming interface (API). It was a simple, obvious string — "servicenowexternalagent" — and it allowed him to connect to ServiceNow as legitimate third-party chat apps do. To do anything of significance with the Virtual Agent, though, he had to impersonate a specific user. Costello's second discovery, then, was quite convenient. He found that as far as ServiceNow was concerned, all a user needed to prove their identity was their email address — no password, let alone multifactor authentication (MFA), was required. ... An attacker could use this information to create tickets and manage workflows, but the stakes are now higher, because ServiceNow decided to upgrade its virtual agent: it can now also engage the platform's shiny new "Now Assist" agentic AI technology. ... "It's not just a compromise of the platform and what's in the platform — there may be data from other systems being put onto that platform," he notes, adding, "If you're any reasonably-sized organization, you are absolutely going to have ServiceNow hooked up to all kinds of other systems. So with this exploit, you can also then ... pivot around to Salesforce, or jump to Microsoft, or wherever."


Cybercrime Inc.: When hackers are better organized than IT

Cybercrime has transformed from isolated incidents into an organized industry. The large groups operate according to the same principles as international corporations. They have departments, processes, management levels, and KPIs. They develop software, maintain customer databases, and evaluate their success rates. ... Cybercrime now functions like a service chain. Anyone planning an attack today can purchase all the necessary components — from initial access credentials to leak management. Access brokers sell access to corporate networks. Botnet operators provide computing power for attacks. Developers deliver turnkey exploits tailored to known vulnerabilities. Communication specialists handle contact with the victims. ... What makes cybercrime so dangerous today is not just the technology itself, but the efficiency of its use. Attackers are flexible, networked, and eager to experiment. They test, discard, and improve — in cycles that are almost unimaginable in a corporate setting. Recruitment is handled like in startups. Job offers for developers, social engineers, or language specialists circulate in darknet forums. There are performance bonuses, training, and career paths. The work methods are agile, communication is decentralized, and financial motivation is clearly defined. ... Given this development, absolute security is unattainable. The crucial factor is the ability to quickly regain operational capability after an attack. Cyber resilience describes this competence — not only to survive crises but also to learn from them.

Daily Tech Digest - July 20, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


Lean Agents: The Agile Workforce of Agentic AI

Organizations are tired of gold‑plated mega systems that promise everything and deliver chaos. Enter frameworks like AutoGen and LangGraph, alongside protocols such as MCP, all enabling Lean Agents to be spun up on-demand, plug into APIs, execute a defined task, then quietly retire. This is a radical departure from heavyweight models that stay online indefinitely, consuming compute cycles, budget, and attention. ... Lean Agents are purpose-built AI workers: minimal in design, maximally efficient in function. Think of them as stateless or scoped-memory micro-agents: they wake when triggered, perform a discrete task like summarizing an RFP clause or flagging anomalies in payments, and then gracefully exit, freeing resources and eliminating runtime drag. Lean Agents are to AI what Lambda functions are to code: ephemeral, single-purpose, and cloud-native. They may hold just enough context to operate reliably but otherwise avoid persistent state that bloats memory and complicates governance. ... From a technology standpoint, frameworks like these, combined with the emerging Model Context Protocol (MCP), give engineering teams the scaffolding to create discoverable, policy‑aware agent meshes. Lean Agents transform AI from a monolithic “brain in the cloud” into an elastic workforce that can be budgeted, secured, and reasoned about like any other microservice.
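
A minimal sketch of what such a lean, single-purpose agent could look like (the function names and the stubbed model call are assumptions; a real deployment would wire this into a Lambda-style trigger and an actual model client):

```python
# Hypothetical sketch of a stateless, single-purpose "lean agent".
def llm_complete(prompt: str) -> str:
    # Placeholder for a real model call; returns a canned answer so the sketch runs.
    return "Two-sentence summary of the clause."


def summarize_rfp_clause(event: dict) -> dict:
    """Wake on a trigger, perform one scoped task, return the result, keep no state."""
    clause_text = event["clause_text"]  # only the context this one task needs
    summary = llm_complete(f"Summarize this RFP clause in two sentences:\n{clause_text}")
    return {"clause_id": event.get("clause_id"), "summary": summary}
    # No persistent memory and no long-lived process: the agent exits after returning.


if __name__ == "__main__":
    print(summarize_rfp_clause({"clause_id": "C-12", "clause_text": "Supplier shall..."}))
```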


Cloud Repatriation Is Harder Than You Think

Repatriation is not simply a reverse lift-and-shift process. Workloads that have developed in the cloud often have specific architectural dependencies that are not present in on-premises environments. These dependencies can include managed services like identity providers, autoscaling groups, proprietary storage solutions, and serverless components. As a result, moving a workload back on-premises typically requires substantial refactoring and a thorough risk assessment. Untangling these complex layers is more than just a migration; it represents a structural transformation. If the service expectations are not met, repatriated applications may experience poor performance or even fail completely. ... You cannot migrate what you cannot see. Accurate workload planning relies on complete visibility, which includes not only documented assets but also shadow infrastructure, dynamic service relationships, and internal east-west traffic flows. Static tools such as CMDBs or Visio diagrams often fall out of date quickly and fail to capture real-time behavior. These gaps create blind spots during the repatriation process. Application dependency mapping addresses this issue by illustrating how systems truly interact at both the network and application layers. Without this mapping, teams risk disrupting critical connections that may not be evident on paper.


AI Agents Are Creating a New Security Nightmare for Enterprises and Startups

The agentic AI landscape is still in its nascent stages, making it the opportune moment for engineering leaders to establish robust foundational infrastructure. While the technology is rapidly evolving, the core patterns for governance are familiar: Proxies, gateways, policies, and monitoring. Organizations should begin by gaining visibility into where agents are already running autonomously — chatbots, data summarizers, background jobs — and add basic logging. Even simple logs like “Agent X called API Y” are better than nothing. Routing agent traffic through existing proxies or gateways in a reverse mode can eliminate immediate blind spots. Implementing hard limits on timeouts, max retries, and API budgets can prevent runaway costs. While commercial AI gateway solutions are emerging, such as Lunar.dev, teams can start by repurposing existing tools like Envoy, HAProxy, or simple wrappers around LLM APIs to control and observe traffic. Some teams have built minimal “LLM proxies” in days, adding logging, kill switches, and rate limits. Concurrently, defining organization-wide AI policies — such as restricting access to sensitive data or requiring human review for regulated outputs — is crucial, with these policies enforced through the gateway and developer training.
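
As a sketch of that "start simple" advice (the class and parameter names are assumptions, and the model call is stubbed), a minimal in-house proxy adding logging, a call budget, and a kill switch could look like this:

```python
# Minimal sketch of an in-house "LLM proxy" wrapper with logging, a kill switch,
# and a per-day call budget. The call_model stub and the numbers are assumptions;
# in practice the wrapper sits in front of your real model or agent client.
import logging
import threading

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-proxy")


class LLMProxy:
    def __init__(self, call_model, daily_budget: int = 1000):
        self._call_model = call_model   # the real model/API client goes here
        self._budget = daily_budget
        self._calls = 0
        self._killed = False
        self._lock = threading.Lock()

    def kill(self) -> None:
        """Flip the kill switch: all further agent calls are refused."""
        self._killed = True

    def complete(self, agent_name: str, prompt: str) -> str:
        with self._lock:
            if self._killed:
                raise RuntimeError("kill switch engaged")
            if self._calls >= self._budget:
                raise RuntimeError("daily API budget exhausted")
            self._calls += 1
        log.info("Agent %s called model (call %d/%d)", agent_name, self._calls, self._budget)
        return self._call_model(prompt)


# Usage: proxy = LLMProxy(call_model=lambda p: "stub response", daily_budget=100)
#        proxy.complete("lead-qualifier", "Score this lead: ...")
```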


The Evolution of Software Testing in 2025: A Comprehensive Analysis

The testing community has evolved beyond the conventional shift-left and shift-right approaches to embrace what industry leaders term "shift-smart" testing. This holistic strategy recognizes that quality assurance must be embedded throughout the entire software development lifecycle, from initial design concepts through production monitoring and beyond. While shift-left testing continues to emphasize early validation during development phases, shift-right testing has gained equal prominence through its focus on observability, chaos engineering, and real-time production testing. ... Modern testing platforms now provide insights into how testing outcomes relate to user churn rates, release delays, and net promoter scores, enabling organizations to understand the direct business impact of their quality assurance investments. This data-driven approach transforms testing from a technical activity into a business-critical function with measurable value. Artificial intelligence platforms are revolutionizing test prioritization by predicting where failures are most likely to occur, allowing testing teams to focus their efforts on the highest-risk areas. ... Modern testers are increasingly taking on roles as quality coaches, working collaboratively with development teams to improve test design and ensure comprehensive coverage aligned with product vision.


7 lessons I learned after switching from Google Drive to a home NAS

One of the first things I realized was that a NAS is only as fast as the network it’s sitting on. Even though my NAS had decent specs, file transfers felt sluggish over Wi-Fi. The new drives weren’t at fault, but my old router was proving to be a bottleneck. Once I wired things up and upgraded my router, the difference was night and day. Large files opened like they were local. So, if you’re expecting killer performance, make sure to look out for the network box, because it perhaps matters just as much as the NAS itself. ... There was a random blackout at my place, and until then, I hadn’t hooked my NAS to a power backup system. As a result, the NAS shut off mid-transfer without warning. I couldn’t tell if I had just lost a bunch of files or if the hard drives had been damaged too — and that was a fair bit scary. I couldn’t let this happen again, so I decided to connect the NAS to an uninterruptible power supply unit (UPS). ... I assumed that once I uploaded my files to Google Drive, they were safe. Google would do the tiring job of syncing, duplicating, and mirroring on some faraway data center. But in a self-hosted environment, you are the one responsible for all that. I had to put safety nets in place for possible instances where a drive fails or the NAS dies. My current strategy involves keeping some archived files on a portable SSD, a few important folders synced to the cloud, and some everyday folders on my laptop set up to sync two-way with my NAS.


5 key questions your developers should be asking about MCP

Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair. ... Remote deployment obviously addresses the scaling but opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by a March 2025 streamable HTTP update, which tries to reduce complexity by putting everything through a single /messages endpoint. Even so, this isn’t really needed for most companies that are likely to build MCP servers. But here’s the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach — so, if you’re deploying today, you’re probably going to support both. Protocol detection and dual transport support are a must. ... However, the biggest security consideration with MCP is around tool execution itself. Many tools need broad permissions to be useful, which means sweeping scope design is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations


Firmware Vulnerabilities Continue to Plague Supply Chain

"The major problem is that the device market is highly competitive and the vendors [are] competing not only to the time-to-market, but also for the pricing advantages," Matrosov says. "In many instances, some device manufacturers have considered security as an unnecessary additional expense." The complexity of the supply chain is not the only challenge for the developers of firmware and motherboards, says Martin Smolár, a malware researcher with ESET. The complexity of the code is also a major issue, he says. "Few people realize that UEFI firmware is comparable in size and complexity to operating systems — it literally consists of millions of lines of code," he says. ... One practice that hampers security: Vendors will often try to only distribute security fixes under a non-disclosure agreement, leaving many laptop OEMs unaware of potential vulnerabilities in their code. That's the exact situation that left Gigabyte's motherboards with a vulnerable firmware version. Firmware vendor AMI fixed the issues years ago, but the issues have still not propagated out to all the motherboard OEMs. ... Yet, because firmware is always evolving as better and more modern hardware is integrated into motherboards, the toolset also need to be modernized, Cobalt's Ollmann says.


Beyond Pilots: Reinventing Enterprise Operating Models with AI

Historically, AI models required vast volumes of clean, labeled data, making insights slow and costly. Large language models (LLMs) have upended this model, pre-trained on billions of data points and able to synthesize organizational knowledge, market signals, and past decisions to support complex, high-stakes judgment. AI is becoming a powerful engine for revenue generation through hyper-personalization of products and services, dynamic pricing strategies that react to real-time market conditions, and the creation of entirely new service offerings. More significantly, AI is evolving from completing predefined tasks to actively co-creating superior customer experiences through sophisticated conversational commerce platforms and intelligent virtual agents that understand context, nuance, and intent in ways that dramatically enhance engagement and satisfaction. ... In R&D and product development, AI is revolutionizing operating models by enabling faster go-to-market cycles. AI can simulate countless design alternatives, optimize complex supply chains in real time, and co-develop product features based on deep analysis of customer feedback and market trends. These systems can draw from historical R&D successes and failures across industries, accelerating innovation by applying lessons learned from diverse contexts and domains.


Alternative clouds are on the rise

Alt clouds, in their various forms, represent a departure from the “one size fits all” mentality that initially propelled the public cloud explosion. These alternatives to the Big Three prioritize specificity, specialization, and often offer an advantage through locality, control, or workload focus. Private cloud, epitomized by offerings from VMware and others, has found renewed relevance in a world grappling with escalating cloud bills, data sovereignty requirements, and unpredictable performance from shared infrastructure. The old narrative that “everything will run in the public cloud eventually” is being steadily undermined as organizations rediscover the value of dedicated infrastructure, either on-premises or in hosted environments that behave, in almost every respect, like cloud-native services. ... What begins as cost optimization or risk mitigation can quickly become an administrative burden, soaking up engineering time and escalating management costs. Enterprises embracing heterogeneity have no choice but to invest in architects and engineers who are familiar not only with AWS, Azure, or Google, but also with VMware, CoreWeave, a sovereign European platform, or a local MSP’s dashboard. 


Making security and development co-owners of DevSecOps

In my view, DevSecOps should be structured as a shared responsibility model, with ownership but no silos. Security teams must lead from a governance and risk perspective, defining the strategy, standards, and controls. However, true success happens when development teams take ownership of implementing those controls as part of their normal workflow. In my career, especially while leading security operations across highly regulated industries, including finance, telecom, and energy, I’ve found this dual-ownership model most effective. ... However, automation without context becomes dangerous, especially closer to deployment. I’ve led SOC teams that had to intervene because automated security policies blocked deployments over non-exploitable vulnerabilities in third-party libraries. That’s a classic example where automation caused friction without adding value. So the balance is about maturity: automate where findings are high-confidence and easily fixable, but maintain oversight in phases where risk context matters, like release gates, production changes, or threat hunting. ... Tools are often dropped into pipelines without tuning or context, overwhelming developers with irrelevant findings. The result? Fatigue, resistance, and workarounds.

Daily Tech Digest - June 02, 2023

A Data Scientist’s Essential Guide to Exploratory Data Analysis

Analyzing the individual characteristics of each feature is crucial as it will help us decide on their relevance for the analysis and the type of data preparation they may require to achieve optimal results. For instance, we may find values that are extremely out of range and may refer to inconsistencies or outliers. We may need to standardize numerical data or perform a one-hot encoding of categorical features, depending on the number of existing categories. Or we may have to perform additional data preparation to handle numeric features that are shifted or skewed, if the machine learning algorithm we intend to use expects a particular distribution. ... For Multivariate Analysis, best practices focus mainly on two strategies: analyzing the interactions between features, and analyzing their correlations. ... Interactions let us visually explore how each pair of features behaves, i.e., how the values of one feature relate to the values of the other. 
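
A minimal pandas sketch of those two multivariate strategies (the file name and the assumption that the dataset has several numeric columns are illustrative):

```python
# Minimal sketch of the two multivariate steps named above: pairwise interactions
# and correlations. The CSV path and column contents are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv")  # assumed input file

# Correlations: how strongly each pair of numeric features moves together.
corr = df.corr(numeric_only=True)
print(corr.round(2))

# Interactions: scatter plots of every numeric feature against every other,
# to see visually how the values of one feature relate to another.
pd.plotting.scatter_matrix(df.select_dtypes("number"), figsize=(8, 8), diagonal="hist")
plt.tight_layout()
plt.show()
```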


Resilient data backup and recovery is critical to enterprise success

So, what must IT leaders consider? The first step is to establish data protection policies that include encryption and least privilege access permissions. Businesses should then ensure they have three copies of their data – the production copy already exists and is effectively the first copy. The second copy should be stored on a different media type, not necessarily in a different physical location (the logic behind it is to not store your production and backup data in the same storage device). The third copy could or should be an offsite copy that is also offline, air-gapped, or immutable (Amazon S3 with Object Lock is one example). Organizations also need to make sure they have a centralized view of data protection across all environments for greater management, monitoring and governance, and they need orchestration tools to help automate data recovery. Finally, organizations should conduct frequent backup and recovery testing to make sure that everything works as it should.
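
For the immutable, offsite third copy, a hedged boto3 sketch along the lines of the S3 Object Lock example mentioned above might look like this (the bucket name, key, and retention window are assumptions, and the bucket must have been created with Object Lock enabled):

```python
# Minimal sketch of an "immutable offsite copy" using Amazon S3 Object Lock.
# Bucket name, key, and retention period are assumptions; Object Lock (and
# versioning) must already be enabled on the bucket.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

BUCKET = "backups-immutable-example"  # assumed bucket created with Object Lock enabled


def upload_immutable_backup(local_path: str, key: str, retain_days: int = 30) -> None:
    """Upload a backup copy that cannot be deleted or overwritten until retention expires."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=retain_days)
    with open(local_path, "rb") as f:
        s3.put_object(
            Bucket=BUCKET,
            Key=key,
            Body=f,
            ObjectLockMode="COMPLIANCE",            # strictest mode: retention cannot be shortened
            ObjectLockRetainUntilDate=retain_until,
        )

# upload_immutable_backup("/backups/db-2023-06-02.dump", "db/2023-06-02.dump")
```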


Data Warehouse Architecture Types

Different architectural approaches offer unique advantages and cater to varying business requirements. In this comprehensive guide, we will explore different data warehouse architecture types, shedding light on their characteristics, benefits, and considerations. Whether you are building a new data warehouse or evaluating your existing architecture, understanding these options will empower you to make informed decisions that align with your organization’s goals. ... Selecting the right data warehouse architecture is a critical decision that directly impacts an organization’s ability to leverage its data assets effectively. Each architecture type has its own strengths and considerations, and there is no one-size-fits-all solution. By understanding the characteristics, benefits, and challenges of different data warehouse architecture types, businesses can align their architecture with their unique requirements and strategic goals. Whether it’s a traditional data warehouse, hub-and-spoke model, federated approach, data lake architecture, or a hybrid solution, the key is to choose an architecture that empowers data-driven insights, scalability, agility, and flexibility.


What is federated Identity? How it works and its importance to enterprise security

FIM has many benefits, including reducing the number of passwords a user needs to remember, improving their user experience and improving security infrastructure. On the downside, federated identity does introduce complexity into application architecture. This complexity can also introduce new attack surfaces, but on balance, properly implemented federated identity is a net improvement to application security. In general, we can see federated identity as improving convenience and security at the cost of complexity. ... Federated single sign-on allows for sharing credentials across enterprise boundaries. As such, it usually relies on a large, well-established entity with widespread security credibility, organizations such as Google, Microsoft, and Amazon, for example. In this case, applications are usually gaining not just a simplified login experience for their users, but the impression and actual reliance on high-level security infrastructure. Put another way, even a small application can add “Sign in with Google” to its login flow relatively easily, giving users a simple login option, which keeps sensitive information in the hands of the big organization.
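
On the application side, the "Sign in with Google" handoff largely reduces to verifying the ID token the identity provider returns. A minimal sketch using the google-auth library (the client ID is an assumption, and the redirect/consent flow itself is omitted):

```python
# Minimal sketch of the relying-party side of "Sign in with Google": verifying the
# signed ID token with the google-auth library. The client ID is an assumption.
from google.oauth2 import id_token
from google.auth.transport import requests as google_requests

GOOGLE_CLIENT_ID = "your-app-client-id.apps.googleusercontent.com"  # assumed


def verify_google_login(token: str) -> dict:
    """Validate the signed ID token and return the user's claims.
    The application never sees the user's Google password: the IdP keeps it."""
    claims = id_token.verify_oauth2_token(token, google_requests.Request(), GOOGLE_CLIENT_ID)
    return {"user_id": claims["sub"], "email": claims.get("email")}
```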


Millions of PC Motherboards Were Sold With a Firmware Backdoor

Given the millions of potentially affected devices, Eclypsium’s discovery is “troubling,” says Rich Smith, who is the chief security officer of supply-chain-focused cybersecurity startup Crash Override. Smith has published research on firmware vulnerabilities and reviewed Eclypsium’s findings. He compares the situation to the Sony rootkit scandal of the mid-2000s. Sony had hidden digital-rights-management code on CDs that invisibly installed itself on users’ computers and in doing so created a vulnerability that hackers used to hide their malware. “You can use techniques that have traditionally been used by malicious actors, but that wasn’t acceptable, it crossed the line,” Smith says. “I can’t speak to why Gigabyte chose this method to deliver their software. But for me, this feels like it crosses a similar line in the firmware space.” Smith acknowledges that Gigabyte probably had no malicious or deceptive intent in its hidden firmware tool. But by leaving security vulnerabilities in the invisible code that lies beneath the operating system of so many computers, it nonetheless erodes a fundamental layer of trust users have in their machines. 


Minimising the Impact of Machine Learning on our Climate

There are several things we can do to mitigate the negative impact of software on our climate. They will be different depending on your specific scenario. But what they all have in common is that they should strive to be energy-efficient, hardware-efficient and carbon-aware. GSF is gathering patterns for different types of software systems; these have all been reviewed by experts and agreed on by all member organisations before being published. In this section we will cover some of the patterns for machine learning as well as some good practices which are not (yet?) patterns. If we divide the actions after the ML life cycle, or at least a simplified version of it, we get four categories: Project Planning, Data Collection, Design and Training of ML model and finally, Deployment and Maintenance. The project planning phase is the time to start asking the difficult questions, think about what the carbon impact of your project will be and how you plan to measure it. This is also the time to think about your SLA; overcommitting to strict latency or performance metrics that you actually don’t need can quickly become a source of emission you can avoid.
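
One way to make "plan how you will measure it" concrete is to wrap training in an emissions tracker; the sketch below assumes the open-source codecarbon package (which the article does not name) and a placeholder training function:

```python
# Hypothetical sketch: measuring the carbon footprint of a training run with codecarbon.
# The package choice and the train_model() placeholder are assumptions.
from codecarbon import EmissionsTracker


def train_model() -> None:
    # Placeholder for the actual training loop.
    pass


tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent for the run
    print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```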


5 ways AI can transform compliance

Compliance is all about controls. Data must be classified according to multiple rules, and the movement and access to that data recorded. It’s the perfect task for AI. Ville Somppi, vice president of industry solutions at M-Files, says: "Thanks to AI, organisations can automatically classify information and apply pre-defined compliance rules. In the case of choosing the right document category from a compliance perspective, the AI can be trained quickly with a small sample set categorised by people. This is convenient, especially when people can still correct wrong suggestions in the beginning of the learning process." ... Data pools are too big for humans to comb through. AI is the only way. In some sectors, adoption of AI has been delayed owing to regulatory issues. However, full deployment ought now to be possible. Gabriel Hopkins, chief product officer at Ripjar, says: "Banks and financial services companies face complex responsibilities when it comes to compliance activities, especially with regard to combatting the financing of terrorism and preventing laundering of criminal proceeds."


Former Uber CSO Sullivan on Engaging the Security Community

CISO is a lonely role. There's a really amazing camaraderie between security executives that I'm not sure exists in any other kind of leadership role. The CISO role is pretty new compared to the other leadership roles. It's far from settled what kind of background is ideal for the role. It's far from settled where the person in the role should report. It’s far from settled what kind of a budget you're going to get. It's far from settled in terms of what type of decision-making power you're going to have. So, as a result, I think security leaders often feel lonely and on an island. They have an executive team above them that expects them to know all the answers about security, and then they have a team underneath them that expects them to know all the answers about security. So, they can't betray ignorance to anybody without undermining their role. And so, the security leader community often turns to each other for support, for guidance. There are a good number of Slack channels and conferences that are just CISOs talking through the role and asking for best practices and advice on how to deal with hard situations.


Google Drive Deficiency Allows Attackers to Exfiltrate Workspace Data Without a Trace

Mitiga reached out to Google about the issue, but the researchers said they have not yet received a response, adding that Google's security team typically doesn't recognize forensics deficiencies as a security problem. This highlights a concern when working with software-as-a-service (SaaS) and cloud providers, in that organizations that use their services "are solely dependent on them regarding what forensic data you can have," Aspir notes. "When it comes to SaaS and cloud providers, we’re talking about a shared responsibility regarding security because you can't add additional safeguards within what is given." ... Fortunately, there are steps that organizations using Google Workspace can take to ensure that the issue outlined by Mitiga isn't exploited, the researchers said. This includes keeping an eye out for certain actions in their Admin Log Events feature, such as events about license assignments and revocations, they said.


How defense contractors can move from cybersecurity to cyber resilience

We’re thinking way too small about a coordinated cyberattack’s capacity for creating major disruption to our daily lives. One recent, vivid illustration of that fact happened in 2022, when the Russia-linked cybercrime group Conti launched a series of prolonged attacks on the core infrastructure of the country of Costa Rica, plunging the country into chaos for months. Over a period of two weeks, Conti tried to breach different government organizations nearly every day, targeting a total of 27 agencies. Soon after that, the group launched a separate attack on the country’s health care system, causing tens of thousands of appointments to be canceled and patients to experience delays in getting treatment. The country declared a national emergency and eventually, with the help of allies around the world including the United States and Microsoft, regained control of its systems. The US federal government’s strict compliance standards often impede businesses from excelling beyond the most basic requirements. 



Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley

Daily Tech Digest - February 01, 2023

Top 6 roadblocks derailing data-driven projects

Making the challenge of getting sufficient funding for data projects even more daunting is the fact that they can be expensive endeavors. Data-driven projects require a substantial investment of resources and budget from inception, Clifton says. “They are generally long-term projects that can’t be applied as a quick fix to address urgent priorities,” Clifton says. “Many decision makers don’t fully understand how they work or deliver for the business. The complex nature of gathering data to use it efficiently to deliver clear [return on investment] is often intimidating to businesses because one mistake can exponentially drive costs.” When done correctly, however, these projects can streamline and save the organization time and money over the long haul, Clifton says. “That’s why it is essential to have a clear strategy for maximizing data and then ensuring that key stakeholders understand the plan and execution,” he says. In addition to investing in the tools needed to support data-driven projects, organizations need to recruit and retain professionals such as data scientists. 


IoT, connected devices biggest contributors to expanding application attack surface

Along with IoT and connected device growth, rapid cloud adoption, accelerated digital transformation, and new hybrid working models have also significantly expanded the attack surface, the report noted.  ... Inefficient visibility and contextualization of application security risks leave organizations in “security limbo” because they don’t know what to focus on and prioritize, 58% of respondents said. “IT teams are being bombarded with security alerts from across the application stack, but they simply can’t cut through the data noise,” the report read. “It’s almost impossible to understand the risk level of security issues in order to prioritize remediation based on business impact. As a result, technologists are feeling overwhelmed by new security vulnerabilities and threats.” Lack of collaboration and understanding between IT operations teams and security teams is having several negative effects too, the report found, including increased vulnerability to security threats and blind spots, difficulties balancing speed, performance and security priorities, and slow reaction times when addressing security incidents.


Firmware Flaws Could Spell 'Lights Out' for Servers

Five vulnerabilities in the baseboard management controller (BMC) firmware used in servers from 15 major vendors could give attackers the ability to remotely compromise systems widely used in data centers and for cloud services. The vulnerabilities, two of which were disclosed this week by hardware security firm Eclypsium, occur in system-on-chip (SoC) computing platforms that use AMI's MegaRAC BMC software for remote management. The flaws could impact servers produced by at least 15 vendors, including AMD, Asus, ARM, Dell, EMC, Hewlett Packard Enterprise, Huawei, Lenovo, and Nvidia. Eclypsium disclosed three of the vulnerabilities in December but withheld information on two additional flaws until this week to allow AMI more time to mitigate the issues. Since the vulnerabilities can only be exploited if the servers are connected directly to the Internet, the extent of the exposure is hard to measure, says Nate Warfield, director of threat research and intelligence at Eclypsium.


As the anti-money laundering perimeter expands, who needs to be compliant, and how?

Remember: it’s not just existing criminals you’re looking for, but also people who could become part of a money laundering scheme. One very specific category is politically exposed persons (PEPs), meaning government workers or high-ranking officials at risk of bribery or corruption. Another category is people on sanctions lists, such as the Specially Designated Nationals (SDN) list compiled by the Office of Foreign Assets Control (OFAC); these lists contain individuals and groups with links to high-risk countries. Extra vigilance is also necessary when dealing with money service businesses (MSBs), as they’re more likely to become targets for money launderers. The point of all this is that a good AML program must include a thorough screening system that can detect high-risk customers before bringing them onboard. It’s great if you can stop criminals from accessing your system at all, but sometimes they slip through or influence existing customers. That’s why checking users’ backgrounds for red flags isn’t enough. You need to keep an eye on their current activity, too.
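A much-simplified sketch of that screening step might look like the following: check a prospective customer against sanctions and PEP watchlists and collect risk flags before onboarding. The list contents, field names, and rules are illustrative assumptions rather than a real compliance rule set, and ongoing activity monitoring would still be needed afterwards.

```python
# Sketch: pre-onboarding screening against hypothetical watchlists.
from dataclasses import dataclass

# Hypothetical watchlists; in practice these come from OFAC SDN data,
# commercial PEP databases, and similar sources.
SDN_LIST = {"acme trading ltd", "john doe"}
PEP_LIST = {"jane minister"}

@dataclass
class Customer:
    name: str
    is_money_service_business: bool = False

def screen(customer: Customer) -> list[str]:
    """Return the risk flags raised for a prospective customer."""
    flags = []
    name = customer.name.lower()
    if name in SDN_LIST:
        flags.append("sanctions-hit")
    if name in PEP_LIST:
        flags.append("politically-exposed-person")
    if customer.is_money_service_business:
        flags.append("money-service-business")
    return flags

print(screen(Customer("Jane Minister")))           # ['politically-exposed-person']
print(screen(Customer("Acme Trading Ltd", True)))  # ['sanctions-hit', 'money-service-business']
```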


Digital transformation: 4 essential leadership skills

Decisiveness by itself is not enough. A strong technology leader needs to operate with flexibility. The pace of change is no longer linear, and leaders have less time to assess and understand every aspect of a decision. Consequently, decisions are made faster and are not always the best ones. Realizing which decisions are not spot-on and being able to adapt quickly is an example of the type of flexibility a leader needs. Another area leaders should understand is when, how, and from whom to take input when making adjustments. For example, leaders shouldn’t rely solely on customer input to make all product decisions. A flexible leader needs to understand the impact on the development teams and support teams as well. In our experience, teams with decisive and flexible leaders are more accepting of change. This is especially true during transformation. Leaders need to know when and how to be decisive to lead their team to success. In tandem, future-ready leaders can adapt to new information and inputs in today’s fast-paced technology environment.


Pathways to a More Sustainable Data Center

“When building a data center to suit today's needs and the needs 20 years in the future, the location of the facility is a key aspect,” he says. “Does it have space to expand with customer growth? Areas to remediate and replace systems and components? Is it in an area that has an extreme weather event seasonally? Are there ways to bring more power to the facility with this growth?” He says these are just a few of the questions that need to be thought of when deploying and maintaining a data center long term. "Technology may be able to stretch the limits of what’s possible, but sustainability starts with people,” Malloy adds. “Employees that implement and follow data center best practices keep a facility running in peak performance.” He says implementing simple things such as efficient lighting, following management-oriented processes and support-oriented processes for a proper maintenance and part replacement schedule increase the longevity of the facility equipment and increase customer satisfaction. 


Enterprise architecture modernizes for the digital era

Although leading enterprise architects see the need for a tool that better reflects the way they work, they also have concerns. “Provenance and credibility are key, so you risk making the wrong decisions as an enterprise architect if there’s no accuracy in the data,” Gregory says of how EAM tools are reliant on data quality. Winfield agrees, adding: “The difficult bit is getting accurate data into the EAM.” Gartner, in its Magic Quadrant for EA Tools, reports that the EAM sector could face some consolidation, too: “Due to the importance and growth in use of models in modern business, we expect to see some major vendors in adjacent market territories make strategic moves by either buying or launching their own EA tools.” Still, some CIOs question the value of adding EAM tools to their technology portfolio alongside IT service management (ITSM) tools, for example. The Very Group’s Subburaj foresees this being a challenge. “Some business leaders will struggle to see the direct business impact,” he says. 


Career path to CTO – we map out steps to take

Successful CTOs will need a range of skills, including technical but also business attributes. “The ability to advise and steer the technology strategy that is right for the business in the current and changing market conditions is crucial,” says Ryan Sheldrake, field CTO, EMEA, at cloud security firm Lacework. “Spending and investing wisely and in a timely manner is one of the more finessed parts of being a successful CTO.” ... “To achieve a promotion to this level, you need both,” she says. “For most of the CTO assignments we deliver, a solid knowledge base in software engineering, technical, product and enterprise architecture is required, as well as knowledge of cloud technologies and information security. From a leadership perspective, candidates need excellent influencing skills, strategic thinking, commercial management skills, and the gravitas to convey a vision and motivate a team.” There are ways in which individuals can help themselves stand out. “One of the critical things I did that really helped me develop into a CTO was to have an external mentor who was already a CTO,” says Mark Benson, CTO at Logicalis UKI. 


How Good Data Management Enables Effective Business Strategies

Data governance should also not be overlooked as an important component of data management and data quality. Though the terms are sometimes used interchangeably, there are important differences. If data quality, as we’ve seen, is about making sure that all data owned by an organization is complete, accurate, and ready for business use, data governance, by contrast, is about creating the framework and rules by which an organization will use that data. The main purpose of data governance is to ensure the necessary data informs crucial business functions. It is a continuous process of assessing, often through a data steward, whether data that has been cleansed, matched, merged, and made ready for business use is truly fit for its intended purpose. Data governance rests on a steady supply of high-quality data, with frameworks for security, privacy, permissions, access, and other operational concerns. A data management strategy that encompasses the elements described above with respect to data quality will empower a business environment that can successfully achieve and even surpass business goals – from improving customer and employee experiences to increasing revenue and everything in between.
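For the data quality half of that equation, a minimal sketch of rule-based checks could look like this; the field names, reference values, and rules are illustrative assumptions, and a governance framework would decide who owns the rules and what happens when a record fails.

```python
# Sketch: declarative completeness/validity/conformity checks on one record.
import re

QUALITY_RULES = {
    "customer_id": lambda v: bool(v),                                    # completeness
    "email": lambda v: bool(re.match(r"[^@]+@[^@]+\.[^@]+", v or "")),  # validity
    "country": lambda v: v in {"US", "GB", "DE"},                        # conformity to a reference list
}

def assess(record: dict) -> dict:
    """Return a per-field pass/fail report for one record."""
    return {field: rule(record.get(field)) for field, rule in QUALITY_RULES.items()}

record = {"customer_id": "C-1001", "email": "jane@example.com", "country": "FR"}
print(assess(record))  # {'customer_id': True, 'email': True, 'country': False}
```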


What Is Policy-as-Code? An Introduction to Open Policy Agent

As the business, its teams, and their maturity progress, we'll want to shift from manual policy definition to something more manageable and repeatable at enterprise scale. How do we do that? First, we can learn from successful experiments in managing systems at scale: Infrastructure-as-Code (IaC), which treats the content that defines your environments and infrastructure as source code, and DevOps, the combination of people, process, and automation used to achieve "continuous everything," continuously delivering value to end users. Policy-as-code uses code to define and manage policies, which are rules and conditions. Policies are defined, updated, shared, and enforced using code, leveraging source code management (SCM) tools. By keeping policy definitions in source control, whenever a change is made it can be tested, validated, and then executed. The goal of PaC is not to detect policy violations but to prevent them. This leverages DevOps automation capabilities instead of relying on manual processes, allowing teams to move more quickly and reducing the potential for mistakes due to human error.
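To show how a pipeline typically consumes such decisions, here is a small sketch in which a CI job sends its input document to an Open Policy Agent server's data API and blocks the change when the policy denies it. The OPA address, the ci/deploy package path, and the input fields are assumptions about a hypothetical setup.

```python
# Sketch: fail a CI step when the OPA policy decision is not "allow".
import sys
import requests

OPA_URL = "http://localhost:8181/v1/data/ci/deploy/allow"  # hypothetical package path

def deployment_allowed(change: dict) -> bool:
    """Ask OPA whether this change complies with the policy."""
    response = requests.post(OPA_URL, json={"input": change}, timeout=5)
    response.raise_for_status()
    return response.json().get("result", False)

change = {"environment": "production", "reviewed": True, "image_signed": True}
if not deployment_allowed(change):
    sys.exit("Policy violation: deployment blocked before it could happen.")
print("Policy check passed.")
```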



Quote for the day:

"Those who are not true leaders will just affirm people at their own immature level." -- Richard Rohr

Daily Tech Digest - December 31, 2021

Can blockchain solve its oracle problem?

The so-called oracle problem may not be intractable, however — despite what Song suggests. “Yes, there is progress,” says Halaburda. “In supply-chain oracles, we have for example sensors with their individual digital signatures. We are learning about how many sensors there need to be, and how to distinguish manipulation from malfunction from multiple readings.” “We are also getting better in writing contracts taking into account these different cases, so that the manipulation is less beneficial,” Halaburda continues. “In DeFi, we also have multiple sources, and techniques to cross-validate. While we are making progress, though, we haven’t gotten to the end of the road yet.” As noted, oracles are critical to the emerging DeFi sector. “In order for DeFi applications to work and provide value to people and organizations around the world, they require information from the real world — like pricing data for derivatives,” Sam Kim, partner at Umbrella Network — a decentralized layer-two oracle solution — tells Magazine, adding:
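The cross-validation technique Halaburda mentions can be sketched very simply: pull the same value from several independent feeds, take the median as the reference, and flag feeds that stray too far from it. The source names, prices, and 2% tolerance below are illustrative assumptions.

```python
# Sketch: cross-validate multiple oracle feeds against their median.
from statistics import median

feeds = {
    "source_a": 1912.40,
    "source_b": 1910.85,
    "source_c": 2050.00,  # a manipulated or malfunctioning feed
}

reference = median(feeds.values())
tolerance = 0.02  # 2% deviation allowed

suspect = {name: price for name, price in feeds.items()
           if abs(price - reference) / reference > tolerance}

print(f"Reference price: {reference}")
print(f"Feeds flagged for review: {suspect}")
```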


Putting the trust back in software testing in 2022

Millions of organisations rely on manual processes to check the quality of their software applications, despite a fully manual approach presenting a litany of problems. Firstly, with more than 70% of outages caused by human error, testing software manually still leaves companies highly prone to issues. Secondly, it is exceptionally resource-intensive and requires specialist skills. Given the world is in the midst of an acute digital talent crisis, many businesses lack the personnel to dedicate to manual testing. Compounding this challenge is the intrinsic link between software development and business success. With companies coming under more pressure than ever to release faster and more regularly, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point. Companies should be testing their software applications 24/7, but the resource-heavy nature of manual testing makes this impossible. Performing repetitive manual tasks is also demotivating, which is often what leads to critical errors in the first place.


December 2021 Global Tech Policy Briefing

CISA and the National Security Administration (NSA), in the meantime, offered a second revision to their 5G cybersecurity guidance on December 2. According to CISA’s statement, “Devices and services connected through 5G networks transmit, use, and store an exponentially increasing amount of data. This third installment of the Security Guidance for 5G Cloud Infrastructures four-part series explains how to protect sensitive data from unauthorized access.” The new guidelines run on zero-trust principles and reflect the White House’s ongoing concern with national cybersecurity. ... On December 9, the European Commission proposed a new set of measures to ensure labor rights for people working on digital platforms. The proposal will focus on transparency, enforcement, traceability, and the algorithmic management of what it calls, in splendid Eurocratese, “digital labour platforms.” The number of EU citizens working for digital platforms has grown 500 percent since 2016, reaching 28 million, and will likely hit 43 million by 2025. Of the current 28 million, 59 percent work with clients or colleagues in another country. 


10 Predictions for Web3 and the Cryptoeconomy for 2022

Institutions will play a much bigger role in DeFi participation — Institutions are increasingly interested in participating in DeFi. For starters, institutions are attracted to higher-than-average interest-based returns compared to traditional financial products. Also, cost reductions in providing financial services using DeFi open up interesting opportunities for institutions. However, they are still hesitant to participate. Institutions want to confirm that they are only transacting with known counterparties that have completed a KYC process. Growth of regulated DeFi and on-chain KYC attestation will help institutions gain confidence in DeFi. ... DeFi insurance will emerge — As DeFi proliferates, it also becomes the target of security hacks. According to London-based firm Elliptic, losses from DeFi exploits in 2021 totaled over $10B. To protect users from hacks, viable insurance protocols guaranteeing users’ funds against security breaches will emerge in 2022. ... NFT-based communities will give material competition to Web 2.0 social networks — NFTs will continue to expand in how they are perceived.


Firmware attack can drop persistent malware in hidden SSD area

Flex capacity is a feature in SSDs from Micron Technology that enables storage devices to automatically adjust the sizes of raw and user-allocated space to achieve better performance by absorbing write workload volumes. It is a dynamic system that creates and adjusts a buffer of space called over-provisioning, typically taking between 7% and 25% of the total disk capacity. The over-provisioning area is invisible to the operating system and any applications running on it, including security solutions and anti-virus tools. As the user launches different applications, the SSD manager adjusts this space automatically against the workloads, depending on how write or read-intensive they are. ... One attack modeled by researchers at Korea University in Seoul targets an invalid data area with non-erased information that sits between the usable SSD space and the over-provisioning (OP) area, and whose size depends on the two. The research paper explains that a hacker can change the size of the OP area by using the firmware manager, thus generating exploitable invalid data space.
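As a back-of-the-envelope illustration of the mechanism (not a reproduction of the researchers' method), the sketch below shows how shrinking the over-provisioning area widens the slice of previously hidden capacity that becomes reachable from the host; the disk size and percentages are arbitrary values within the 7%-25% range mentioned above.

```python
# Sketch: how resizing the over-provisioning (OP) area changes what the host can see.
RAW_CAPACITY_GB = 512

def visible_capacity(op_fraction: float) -> float:
    """User-visible capacity once the OP area is carved out of raw capacity."""
    return RAW_CAPACITY_GB * (1 - op_fraction)

normal_op = 0.25    # OP area at 25% of raw capacity
shrunken_op = 0.07  # attacker reduces OP to 7% via the firmware manager

exposed_gb = visible_capacity(shrunken_op) - visible_capacity(normal_op)
print(f"Region newly reachable from the host: ~{exposed_gb:.0f} GB of previously hidden space")
```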


'Businesses need to build threat intelligence for cybersecurity': Dipesh Kaura, Kaspersky

Organizations across industries are faced with the challenge of cybersecurity, and the need to build threat intelligence holds equal importance for every business that thrives in a digital economy. While building threat intelligence is crucial, it is also necessary to have a solution that understands the threat vectors for every business, across every industry. A holistic threat intelligence solution looks at every nitty-gritty detail of an enterprise's security framework and extracts the best actionable insights. A threat intelligence platform must capture and monitor real-time feeds from across an enterprise's digital footprint and turn them into insights to build a preventive posture instead of a reactive one. It must diagnose and analyze security incidents on hosts and the network, match signals from internal systems against unknown threats, minimize incident response time, and disrupt the kill chain before critical systems and data are compromised.


IT leadership: 3 ways to show gratitude to teams

If someone on your team takes initiative on a project, let them know that you appreciate them. Pull them aside, look them in the eye, and speak truthfully about how much their extra effort means to you, the team, and the company. Make your thank-yous genuine, direct, and personal. Most individuals value physical tokens of appreciation in addition to expressed gratitude. If you choose to offer a gift, make it as personalized as you can. For example, an Amazon gift card is nice – but a cake from their favorite bakery is even nicer. Personalization means that you’ve thought about them as a person, taken the time to consider what they like, and recognized their contributions as an individual. Contrary to the common belief that we should be lavish with our praise, I would argue that it’s better to be selective. Recognize behavior that lives up to your company’s values and reserve recognition for situations where it is genuinely deserved. If a leader showers praise when it’s not really warranted, they devalue the praise given when team members actually go above and beyond.


Top 5 AI Trends That Will Shape 2022 and Beyond

Under the umbrella of technology there are several terms with which you must already be familiar, such as artificial intelligence, machine learning, deep learning, blockchain technology, cognitive technology, data processing, data science, and big data – the list is endless. Just imagine how we would have survived the pandemic outbreak without technology. What if there had been no laptops, PCs, tablets, smartphones, or gadgets of any sort during COVID-19? How would people have earned a living? What if there had been no Netflix to binge-watch and no social media apps during the pandemic? Undoubtedly, that’s extremely intimidating and intriguing at the same time. Isn’t it giving you goosebumps wondering how fast the technology is advancing? Let’s flick through some jaw-dropping statistics first. Did you know that there are more than 4.88 billion mobile phone users across the world now? According to technology growth statistics, almost 62% of the world’s population owns a smartphone.


Introducing the Trivergence: Transformation driven by blockchain, AI and the IoT

Blockchain is the distributed ledger technology underpinning the cryptocurrency revolution. We call it the internet of value because people can use blockchain for much more than recording crypto transactions. Distributed ledgers can store, manage and exchange anything of value — money, securities, intellectual property, deeds and contracts, music, votes and our personal data — in a secure, private and peer-to-peer manner. We achieve trust not necessarily through intermediaries like banks, stock exchanges or credit card companies but through cryptography, mass collaboration and some clever code. In short, blockchain software aggregates transaction records into batches or “blocks” of data, links and time stamps the blocks into chains that provide an immutable record of transactions with infinite levels of privacy or transparency, as desired. Each of these foundational technologies is uniquely and individually powerful. However, when viewed together, each is transformed. This is a classic case of the whole being greater than the sum of its parts.
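A toy sketch of the "aggregate, link, and time-stamp" idea in that description: each block carries the hash of its predecessor, so tampering with any earlier record is detectable when the chain is re-verified. This illustrates only the chaining mechanism; real blockchains add consensus, signatures, and peer-to-peer replication, and the transactions below are made up.

```python
# Sketch: batch records into blocks, link them by hash, and verify the chain.
import hashlib
import json
import time

def make_block(transactions: list, previous_hash: str) -> dict:
    """Batch transactions into a block, time-stamp it, and hash its contents."""
    content = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    block = dict(content)
    block["hash"] = hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain: list) -> bool:
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        content = {k: v for k, v in block.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()
        if block["hash"] != recomputed:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block(["genesis"], previous_hash="0" * 64)]
chain.append(make_block(["alice pays bob 5"], previous_hash=chain[-1]["hash"]))
chain.append(make_block(["bob pays carol 2"], previous_hash=chain[-1]["hash"]))
print(verify(chain))   # True: the ledger is internally consistent

chain[1]["transactions"] = ["alice pays mallory 500"]
print(verify(chain))   # False: the tampered block no longer matches its recorded hash
```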


Sustainability will be a key focus as the transport sector transitions in 2022

Delivery is also an area where we expect to see the movement towards e-fleets grow. We’ve already seen this being trialled, with parcel-delivery company DPD making the switch to a fully electric fleet in Oxford. It’s estimated that by replicating this in more cities, DPD could reduce CO2 emissions by 42,000 tonnes by 2025. While third-party delivery companies offer retailers an efficient service, carrying as many as 320 parcels a day, this model is challenged by customers’ growing expectation that they can receive deliveries within hours. Sparked by lockdowns, which led to a 48% increase in online shopping, the “rapid grocery delivery” trend looks set to grow in 2022. Grocery delivery company Getir, for example, built a fleet of almost 1,000 vehicles in 2021 to service this need – and is planning to spend £100m more to expand its offering. Given the driver recruitment crisis currently affecting delivery and taxi firms, we are not expecting many other operators to invest that kind of money into building new fleets, though. Instead, you are more likely to see retailers working with existing fleets.



Quote for the day:

"Cream always rises to the top...so do good leaders." -- John Paul Warren