
Daily Tech Digest - December 30, 2025


Quote for the day:

“It is never too late to be what you might have been.” -- George Eliot


Cybersecurity Trends: What's in Store for Defenders in 2026?

For hackers of all stripes, a ready supply of easily procured, useful tools abounds. Numerous breaches trace to information-stealing malware, which grabs credentials from a system and bundles them into a "log." Automated "clouds of logs" make it easy for info stealer subscribers to monetize their attacks. ... Clop, aka Cl0p, again stole data and held it for ransom. How many victims paid a ransom isn't known, although the group's repeated ability to pay for zero-days suggests it's making a tidy profit. Other cybercrime groups appear to have learned from Clop's successes, including The Com cybercrime collective spinoff lately calling itself Scattered Lapsus$ Hunters. One repeat target of that group has been third-party software that connects to the customer relationship management platform Salesforce, allowing them to steal OAuth tokens and gain access to Salesforce instances and customer data. ... Beyond the massive potential illicit revenue being earned by these teenagers, what's also notable is the sheer brutality of many of these attacks, such as data breaches involving children's nurseries including Kiddo and disrupting the British economy to the tune of $2.5 billion through a single attack against Jaguar Land Rover that shut down assembly lines and supply chains. ... Well-designed defenses help blunt many an attacker, or at least slow an intrusion. Enforcing least-privileged access to resources and multifactor authentication always helps, as do concrete security practices designed to block CEO fraud, help desk trickery, and other forms of social engineering.


4 New Year’s resolutions for devops success

“Develop a growth mindset that AI models are not good or bad, but rather a new nondeterministic paradigm in software that can both create new issues and new opportunities,” says Matthew Makai, VP of developer relations at DigitalOcean. “It’s on devops engineers and teams to adapt to how software is created, deployed, and operated.” ... A good place to start is improving observability across APIs, applications, and automations. “Developers should adopt an AI-first, prevention-first mindset, using observability and AIops to move from reactive fixes to proactive detection and prevention of issues,” says Alok Uniyal, SVP and head of process consulting at Infosys. ... “Integrating accessibility into the devops pipeline should be a top resolution, with accessibility tests running alongside security and unit tests in CI as automated testing and AI coding tools mature,” says Navin Thadani, CEO of Evinced. “As AI accelerates development, failing to fix accessibility issues early will only cause teams to generate inaccessible code faster, making shift-left accessibility essential. Engineers should think hard about keeping accessibility in the loop, so the promise of AI-driven coding doesn’t leave inclusion behind.” ... For engineers ready to step up into leadership roles but concerned about taking on direct reports, consider mentoring others to build skills and confidence. “There is high-potential talent everywhere, so aside from learning technical skills, I would challenge devops engineers to also take the time to mentor a junior engineer in 2026,” says Austin Spires.


New framework simplifies the complex landscape of agentic AI

Agent adaptation involves modifying the foundation model that underlies the agentic system. This is done by updating the agent’s internal parameters or policies through methods like fine-tuning or reinforcement learning to better align with specific tasks. Tool adaptation, on the other hand, shifts the focus to the environment surrounding the agent. Instead of retraining the large, expensive foundation model, developers optimize the external tools such as search retrievers, memory modules, or sub-agents. ... If the agent struggles to use generic tools, don't retrain the main model. Instead, train a small, specialized sub-agent (like a searcher or memory manager) to filter and format data exactly how the main agent likes it. This is highly data-efficient and suitable for proprietary enterprise data and applications that are high-volume and cost-sensitive. Use A1 for specialization: If the agent fundamentally fails at technical tasks, you must rewire its understanding of the tool's "mechanics." A1 is best for creating specialists in verifiable domains like SQL or Python or your proprietary tools. For example, you can optimize a small model for your specific toolset and then use it as a T1 plugin for a generalist model. Reserve A2 (agent output signaled) as the "nuclear option": Only train a monolithic agent end-to-end if you need it to internalize complex strategy and self-correction. This is resource-intensive and rarely necessary for standard enterprise applications.
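
As a rough illustration of the tool-adaptation idea (the names, prompt format, and retriever here are hypothetical, not from the framework itself), a small trained searcher sub-agent can sit between a frozen generalist model and a generic retrieval tool:

```python
# Hypothetical sketch of tool adaptation: a small "searcher" sub-agent
# filters and formats raw retrieval results for a frozen generalist model.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    body: str
    score: float

def raw_search(query: str) -> list[Document]:
    # Stand-in for a generic retriever; a real system would call a search API.
    return [
        Document("Q3 report", "Revenue grew 12% quarter over quarter ...", 0.8),
        Document("Old memo", "Unrelated facilities announcement ...", 0.2),
    ]

def searcher_subagent(query: str, hits: list[Document], k: int = 3) -> str:
    # The small, cheap component that gets trained; it picks and formats
    # the evidence exactly the way the main agent consumes it best.
    top = sorted(hits, key=lambda d: d.score, reverse=True)[:k]
    return "\n".join(f"[{d.title}] {d.body[:200]}" for d in top)

def answer(query: str, main_agent) -> str:
    evidence = searcher_subagent(query, raw_search(query))
    # The expensive foundation model stays frozen; it just gets better input.
    return main_agent.generate(f"Question: {query}\nEvidence:\n{evidence}")
```

Only the sub-agent is trained, which is why the approach is data-efficient and keeps high-volume inference costs down.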


Radio signals could give attackers a foothold inside air-gapped devices

For an attack to work, sensitivity needs to be predictable. Multiple copies of the same board model were tested using the same configurations and signal settings. Several sensitivity patterns appeared consistently across samples, meaning an attacker could characterize one device and apply those findings to another of the same model. They also measured stability over 24 hours to assess whether the effect persisted beyond short test windows. Most sensitive frequency regions remained consistent over time, with modest drift in some paths ... Once sensitive paths were identified, the team tested data reception. They used on-off keying, where the transmitter switches a carrier on for a one and off for a zero. This choice matched the observed behavior, which distinguishes between presence and absence of a signal. Under ideal synchronization, several paths achieved bit error rates below 1 percent when estimated received power reached about 10 milliwatts. One path stayed below 2 percent at roughly 1 milliwatt. Bandwidth tests showed that symbol rates up to 100 kilobits per second remained distinguishable, even as transitions blurred at higher rates. In a longer test, the researchers transmitted about 12,000 bits at 1 kilobit per second. At three meters, reception produced no errors. At 20 meters, the bit error rate reached about 6.2 percent. Errors appeared in bursts that standard error correction could address.
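
On-off keying itself is simple to sketch: the carrier is on for a 1 and off for a 0, and the receiver averages power over each symbol window and thresholds it. A minimal simulation (not the researchers' code; the noise level and threshold are invented):

```python
import numpy as np

def ook_modulate(bits, samples_per_symbol=8):
    """On-off keying: carrier present for a 1, absent for a 0."""
    return np.repeat(np.array(bits, dtype=float), samples_per_symbol)

def ook_demodulate(signal, samples_per_symbol=8, threshold=0.5):
    """Average the received power per symbol window, then threshold."""
    symbols = signal.reshape(-1, samples_per_symbol).mean(axis=1)
    return (symbols > threshold).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 12_000)                 # ~12,000 bits, as in the test
noisy = ook_modulate(bits) + rng.normal(0, 0.2, bits.size * 8)
received = ook_demodulate(noisy)
ber = np.mean(received != bits)                   # bit error rate
print(f"BER: {ber:.4f}")
```

Raising the noise level in the sketch mimics the effect of distance, which is why the measured error rate climbed from zero at three meters to roughly 6.2 percent at 20 meters.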


Smart Companies Are Taking SaaS In-House with Agentic Development

The uncomfortable truth: when your critical business processes depend on an AI SaaS vendor’s survival, you’ve outsourced your competitive advantage to their cap table. ... But the deeper risk isn’t operational disruption — it’s strategic surrender. When you pipe your proprietary business context through external AI platforms, you’re training their models on your differentiation. You’re converting what should be permanent strategic assets into recurring operational expenses that drag down EBITDA. For companies evaluating AI SaaS alternatives, the real question is no longer whether to build or buy — but what parts of the AI stack must be owned to protect long‑term competitive advantage. ... “Who maintains these apps?” It’s the right question, with a surprising answer: 1. SaaS Maintenance Isn’t Free — Vendors deprecate APIs, change pricing, pivot features. Your team still scrambles to adapt. Plus, the security risk often comes from having an external third party connecting to internal data. 2. Agents Lower Maintenance Costs Dramatically — Updating deprecated libraries? Agents excel at this, especially with typed languages. The biggest hesitancy — knowledge loss when developers leave — evaporates when agents can explain the codebase to anyone. 3. You Control the Update Schedule — With owned infrastructure, you decide when to upgrade dependencies, refactor components, or add features. No vendor forcing breaking changes on their timeline.


6 cyber insurance gotchas security leaders must avoid

Before committing to a specific insurer, Lindsay recommends consulting an attorney with experience in cyber insurance contracts. “A policy is a legal document with complex definitions,” he notes. “An attorney can flag ambiguous terms, hidden carve-outs, or obligations that could create disputes at claim time,” Lindsay says. ... It’s hardly surprising, but important to remember, that the language contained in cybersecurity policies generally favors the insurer, not the insured. “Businesses often misinterpret the language from their perspective and overlook the risks that the very language of the policy creates,” Polsky warns. ... You may believe your policy will cover all cyberattack losses, yet a look at the fine print may reveal that it’s riddled with exclusions and warranties that can’t be realistically met, particularly in areas such as social engineering, ransomware, and business interruption. ... Many enterprises believe they’re fully secure, yet when they file a claim the insurer points to the fine print about security measures you didn’t know were required, Mayo says. “Now you’re stuck with cleanup costs, legal fees, and potential lawsuits — all without support from your insurance provider.” ... The retroactive date clause can be the biggest cyber insurance trap, warns Paul Pioselli, founder and CEO of cybersecurity services firm Solace. ... Perhaps the biggest mistake an insurance seeker can make is failing to understand the difference between first-party coverage and third-party coverage, and therefore failing to acquire a policy that includes both, says Dylan Tate.


7 major IT disasters of 2025

In July, US cleaning product vendor Clorox filed a $380 million lawsuit against Cognizant, accusing the IT services provider’s helpdesk staff of handing over network passwords to cybercriminals who called and asked for them. ... Zimmer Biomet, a medical device company, filed a $172 million lawsuit against Deloitte in September, accusing the IT consulting company of failing to deliver promised results in a large-scale SAP S/4HANA deployment. ... In September, a massive fire at the National Information Resources Service (NIRS) government data center in South Korea resulted in the loss of 858TB of government data stored there. ... Multiple Google cloud services, including Gmail, Docs, Drive, Maps, and Gemini, were taken down during a massive outage in June. The outage was triggered by an earlier policy change to Google Service Control, a control plane service that provides functionality for managed services, with a null-pointer crash loop breaking APIs across several products. ... In late October, Amazon Web Services’ US-EAST-1 region was hit with a significant outage, lasting about three hours during early morning hours. The problem was related to DNS resolution of the DynamoDB API endpoint in the region, causing increased error rates, latency, and new instance launch failures for multiple AWS services. ... In late July, services in Microsoft’s Azure East US region were disrupted, with customers experiencing allocation failures when trying to create or update virtual machines. The problem? A lack of capacity, with a surge in demand outstripping Microsoft’s computing resources.


Stop Guessing, Start Improving: Using DORA Metrics and Process Behavior Charts

The DORA framework consists of several key metrics. Among them, Change Lead Time (CLT) shows how quickly a team can deliver change. Deployment Frequency (DF) shows what the team actually delivers. While important, DF is often more volatile, influenced by team size, vacations, and the type of work being done. Finally, the instability metrics and reliability SLOs serve as a counterbalance. ... Beyond spotting special causes, PBCs are also useful for detecting shifts, moments when the entire system moves to a new performance level. In the commute example above, these shifts appear as clear drops in the average commute time whenever a real improvement is introduced, such as buying a bike or finding a shorter route. Technically, a shift occurs when several consecutive points fall above or below the previous mean, signaling that the process has fundamentally changed. ... Sustainable improvement is rarely linear. It depends on a series of strategic bets whose effects emerge over time. Some succeed, others fail, and external factors, from tooling changes to team turnover, often introduce temporary setbacks. ... According to DORA research, these metrics have a predictive relationship with broader outcomes such as organizational performance and team well-being. In other words, teams that score higher on DORA metrics are statistically more likely to achieve better business results and report higher satisfaction.
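
For readers who want to try this on their own delivery data, a process behavior chart (XmR chart) takes only a few lines. The sketch below uses the standard 2.66 moving-range constant and an eight-point run rule for shifts; the article does not specify which exact rule it uses, so treat this as one common convention:

```python
import numpy as np

def xmr_limits(values):
    """Natural process limits for an XmR (individuals) chart."""
    x = np.asarray(values, dtype=float)
    mean = x.mean()
    m_r = np.abs(np.diff(x)).mean()        # average moving range
    return mean, mean + 2.66 * m_r, mean - 2.66 * m_r

def detect_shift(values, mean, run=8):
    """Flag a shift when `run` consecutive points fall on one side of the mean."""
    x = np.asarray(values, dtype=float)
    side = np.sign(x - mean)
    count = 0
    for i, s in enumerate(side):
        count = count + 1 if i > 0 and s != 0 and s == side[i - 1] else 1
        if count >= run:
            return i                        # index where the run rule fires
    return None

# Example: daily change lead times (hours); a real improvement lands mid-series.
clt = [30, 28, 33, 29, 31, 27, 32, 30, 18, 17, 19, 16, 18, 17, 19, 18]
mean, upper, lower = xmr_limits(clt[:8])    # limits from a baseline window
print(f"mean={mean:.1f}, UNPL={upper:.1f}, LNPL={lower:.1f}, "
      f"shift detected at index {detect_shift(clt, mean)}")
```

Points outside the natural process limits are special causes worth investigating; a sustained run on one side of the mean signals the kind of system-level shift the article describes.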


5 Threats That Defined Security in 2025

Salt Typhoon is a Chinese state-sponsored threat actor best known in recent memory for a campaign, discovered last fall, against telecom giants — including Verizon, AT&T, Lumen Technologies, and multiple others — that targeted the systems used by police for court-authorized wiretapping. The group, also known as Operator Panda, uses sophisticated techniques to conduct espionage against targets and pre-position itself for longer-term attacks. ... CISA layoffs, indirectly, mark a threat of a different kind. At the beginning of the year, the Trump administration cut all advisory committee members within the Cyber Safety Review Board (CSRB), a group run by public and private sector experts to research and make judgments about large issues of the moment. As the CSRB was effectively shuttered, it was working on a report about Salt Typhoon. ... React2Shell describes CVE-2025-55182, a vulnerability disclosed early this month affecting the React Server Components (RSC) open source protocol. Caused by unsafe deserialization, the vulnerability was considered easily exploitable and highly dangerous, earning it a maximum CVSS score of 10. Even worse, React is fairly ubiquitous, and at the time of disclosure it was thought that a third of cloud providers were vulnerable. ... In September, a self-replicating malware emerged known as Shai-Hulud. It's an infostealer that infects open source software components; when a user downloads a package infected by the worm, Shai-Hulud infects other packages maintained by the user and publishes poisoned versions, automatically and without much direct attacker input.


How data-led intelligence can help apparel manufacturers and retailers adapt faster to changing consumer behaviour

AI is already helping retail businesses to understand the complex buying patterns of India’s diverse population. To predict demand, big box chains such as Reliance Retail and e-commerce leaders like Flipkart use machine learning algorithms to analyse historical sales, search patterns and even social media conversations. ... With data-led intelligence studying real-time demand signals, manufacturers can adjust their lines much sooner. If data shows a rising preference for electric scooters in certain cities, for instance, factories can scale up output before the trend peaks. And when interest in a product starts dipping, production can be slowed to prevent excess stock. ... One of the strongest outcomes of the AI wave is its ability to bring consumer demand and industrial supply onto the same page. In the past, customer preferences often evolved faster than factories could react, creating gaps between what buyers wanted and what stores stocked. AI has made this far easier to manage. Manufacturers and retailers now share richer data and insights across the supply chain, allowing production teams to plan with far better clarity. This also enhances supply chain transparency, a growing priority for global buyers seeking traceability. ... If data intelligence tools notice a sharp rise in conversations around eco-friendly packaging or sustainable clothing, retailers can adjust their marketing and stock in advance, while manufacturers source greener materials and redesign processes to match the growing interest.

Daily Tech Digest - May 21, 2025


Quote for the day:

"A true dreamer is one who knows how to navigate in the dark." -- John Paul Warren


How Microsoft wants AI agents to use your PC for you

Microsoft’s concept revolves around the Model Context Protocol (MCP), which was created by Anthropic (the company behind the Claude chatbot) last year. That’s an open-source protocol that AI apps can use to talk to other apps and web services. Soon, Microsoft says, you’ll be able to let a chatbot — or “AI agent” — connect to apps running on your PC and manipulate them on your behalf. ... Compared to what Microsoft is proposing, past “agentic” AI solutions that promised to use your computer for you aren’t quite as compelling. They’ve relied on looking at your computer’s screen and using that input to determine what to click and type. This new setup, in contrast, is neat — if it works as promised — because it lets an AI chatbot interact directly with any old traditional Windows PC app. But the Model Context Protocol solution is even more advanced and streamlined than that. Rather than a chatbot having to put together a Spotify playlist by dragging and dropping songs in the old-fashioned way, it would give the AI the ability to give instructions to the Spotify app in a more simplified form. On a more technical level, Microsoft will let application developers make their applications function as MCP servers — a fancy way of saying they’d act like a bridge between the AI models and the tasks they perform. 
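
For developers, exposing an app action as an MCP server can be quite small. Here is a minimal sketch using the FastMCP helper from the protocol's reference Python SDK; the playlist tool is hypothetical, not an actual Spotify integration:

```python
# Minimal MCP server sketch (Python reference SDK); the tool is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("playlist-demo")

@mcp.tool()
def add_to_playlist(playlist: str, track: str) -> str:
    """Expose one app action as a tool an AI agent can call directly,
    instead of the agent dragging and dropping in the UI."""
    # A real app would invoke its own internal playlist logic here.
    return f"Added {track!r} to {playlist!r}"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so a chatbot/agent can connect
```

The application becomes the bridge: the agent asks for the action in structured form, and the app carries it out with its own logic rather than being driven through screenshots and simulated clicks.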


How vulnerable are undersea cables?

The only way to effectively protect a cable against sabotage is to bury the entire cable, says Liwång, and that is not economically justifiable. In the Baltic Sea, it is easier and more sensible to repair cables when they break, and more important to lay additional cables than to try to protect a few.
Burying all transoceanic cables is hardly feasible in practice either. ... “Cable breaks are relatively common even under normal circumstances. In terrestrial networks, they can be caused by various factors, such as excavators working near the fiber installation and accidentally cutting it. In submarine cables, cuts can occur, for example, due to irresponsible use of anchors, as we have seen in recent reports,” says Furdek Prekratic. Network operators ensure that individual cable breaks do not lead to widespread disruptions, she notes: “Optical fiber networks rely on two main mechanisms to handle such events without causing a noticeable disruption to traffic. The first is called protection. The moment an optical connection is established over a physical path between two endpoints, resources are also allocated to another connection that takes a completely different path between the same endpoints. If a failure occurs on any link along the primary path, the transmission quickly switches to the secondary path. The second mechanism is called failover. Here, the secondary path is not reserved in advance, but is determined after the primary path has suffered a failure.”
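
The difference between the two mechanisms she describes comes down to whether the backup path is reserved before, or computed after, the failure. A toy sketch (path names are illustrative only):

```python
# Illustrative sketch of the two recovery mechanisms described above.

class Connection:
    def __init__(self, primary, backup=None):
        self.primary = primary      # physical path in use
        self.backup = backup        # pre-reserved only under "protection"
        self.active = primary

    def on_failure(self, compute_new_path=None):
        if self.backup is not None:
            # Protection: resources were reserved in advance on a disjoint
            # path, so switchover is immediate.
            self.active = self.backup
        elif compute_new_path is not None:
            # Failover: the secondary path is only determined after the
            # primary has failed, so recovery is slower.
            self.active = compute_new_path()
        return self.active

protected = Connection(primary="A-B-C", backup="A-D-C")
print(protected.on_failure())                      # -> "A-D-C"

unprotected = Connection(primary="A-B-C")
print(unprotected.on_failure(lambda: "A-E-F-C"))   # -> "A-E-F-C"
```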


Driving business growth through effective productivity strategies

In times of economic uncertainty, it is to be expected that businesses grow more cautious with their spending. However, this can result in missed opportunities to improve productivity in favour of cost reductions. While cutting costs can seem an attractive option in light of economic doubts, it is merely a short-term solution. When businesses hold back from knee-jerk reactions and maintain a focus on sustainable productivity gains, they will find themselves reaping rewards in the long term. Strategic investments in technology solutions are essential to support businesses in driving their productivity strategies forward. With new technology constantly being introduced, there are a lot of options for business decision makers to consider. Most obviously, there are technology features in our ERP systems, and in our project management and collaboration tools, that can be used to facilitate significant flexibility or performance advantages compared to legacy approaches and processes. ... While technology is a vital part of any innovative productivity model, it’s just one piece of the puzzle. It is no use installing modern technology if internal processes remain outdated. Businesses must also look to weed out inefficient practices to improve and streamline resource management. 


Synthetic data’s fine line between reward and disaster

Generating large volumes of training data on demand is appealing compared to the slow, expensive gathering of real-world data, which can be fraught with privacy concerns or simply unavailable. Synthetic data ought to help preserve privacy, speed up development, and be more cost effective for long-tail scenarios enterprises couldn’t otherwise tackle, she adds. It can even be used for controlled experimentation, assuming you can make it accurate enough. Purpose-built data is ideal for scenario planning and running intelligent simulations, and synthetic data detailed enough to cover entire scenarios could predict future behavior of assets, processes, and customers, which would be invaluable for business planning. ... Created properly, synthetic data mimics statistical properties and patterns of real-world data without containing actual records from the original dataset, says Jarrod Vawdrey, field chief data scientist at Domino Data Lab. And David Cox, VP of AI Models at IBM Research, suggests viewing it as amplifying rather than creating data. “Real data can be extremely expensive to produce, but if you have a little bit of it, you can multiply it,” he says. “In some cases, you can make synthetic data that’s much higher quality than the original. The real data is a sample. It doesn’t cover all the different variations and permutations you might encounter in the real world.”
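
As a toy version of "mimicking statistical properties without containing actual records," the sketch below fits per-column marginal statistics and samples fresh rows. Real synthetic data tools model joint structure as well (copulas, deep generative models); every name and number here is invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Stand-in for a real, sensitive dataset.
real = pd.DataFrame({
    "age": rng.normal(41, 9, 500).round(),
    "monthly_spend": rng.lognormal(5.2, 0.4, 500),
})

# Fit simple marginal statistics per column ...
params = {c: (real[c].mean(), real[c].std()) for c in real.columns}

# ... and sample brand-new rows that share those statistics but
# contain no record from the original dataset.
synthetic = pd.DataFrame({
    c: rng.normal(mu, sigma, 1000) for c, (mu, sigma) in params.items()
})
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```

The "amplification" Cox describes is the same move at much higher fidelity: learn the distribution from a small real sample, then draw as many new rows as needed.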


AI Interventions to Reduce Cycle Time in Legacy Modernization

As the software becomes difficult to change, businesses may choose to tolerate conceptual drift or compensate for it through their operations. When the difficulty of modifying the software poses a significant enough business risk, a legacy modernization effort is undertaken. Legacy modernization efforts showcase the problem of concept recovery. In these circumstances, recovering a software system’s underlying concept is the labor-intensive bottleneck step to any change. Without it, the business risks a failed modernization or losing customers that depend on unknown or under-considered functionality. ... The goal of a software modernization’s design phase is to perform enough validation of the approach to be able to start planning and development while minimizing the amount of rework that could result from missed information. Traditionally, substantial lead time is spent in the design phase inspecting legacy source code, producing a target architecture, and collecting business requirements. These activities are time-intensive, mutually interdependent, and usually the bottleneck step in modernization. While exploring how to use LLMs for concept recovery, we encountered three challenges to effectively serving teams performing legacy modernizations: which context was needed and how to obtain it, how to organize context so humans and LLMs can both make use of it, and how to support iterative improvement of requirements documents.


OWASP proposes a way for enterprises to automatically identify AI agents

“The confusion about ANS versus protocols like MCP, A2A, ACP, and Microsoft Entra is understandable, but there’s an important distinction to make: ANS is a discovery service, not a communication protocol,” Narajala said. “MCP, A2A and ACP define how agents talk to each other once connected, like HTTP for web. ANS defines how agents find and verify each other before communication, like DNS for web. Microsoft Entra provides identity services, but primarily within Microsoft’s ecosystem.” ... “We’re fast approaching the point where the need for a standard to identify AI agents becomes painfully obvious. Right now, it’s a mess. Companies are spinning up agents left and right, with no trusted way to know what they are, what they do, or who built them,” Tvrdik said. “The Wild West might feel exciting, but we all know how most of those stories end. And it’s not secure.” As for ANS, he said, “it makes sense in theory. Treat agents like domains. Give them names, credentials, and a way to verify who’s talking to what. That helps with security, sure, but also with keeping things organized. Without it, we’re heading into chaos.” But Tvrdik stressed that the deployment mechanisms will ultimately determine if ANS works.
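
In the DNS-for-agents spirit Narajala describes, a resolver maps an agent name to an endpoint plus credentials to verify before any agent-to-agent conversation starts. The sketch below is purely illustrative; it is not the OWASP ANS schema or API:

```python
# Purely illustrative agent-name resolution; NOT the OWASP ANS spec.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str               # e.g. "invoice-auditor.finance.example"
    endpoint: str           # where the agent can be reached
    cert_fingerprint: str   # used to verify identity before talking

REGISTRY = {
    "invoice-auditor.finance.example": AgentRecord(
        "invoice-auditor.finance.example",
        "https://agents.example/invoice-auditor",
        "sha256:ab12...",
    ),
}

def resolve(name: str) -> AgentRecord:
    """Discovery step: find and verify the agent before any
    MCP/A2A-style communication happens (like DNS before HTTP)."""
    record = REGISTRY.get(name)
    if record is None:
        raise LookupError(f"unknown agent: {name}")
    return record

print(resolve("invoice-auditor.finance.example").endpoint)
```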


Driving DevOps With Smart, Scalable Testing

Testing apps manually isn’t easy and consumes a lot of time and money. Testing complex ones with frequent releases requires an enormous number of human hours when attempted manually. This will affect the release cycle, results will take longer to appear, and if shown to be a failure, you’ll need to conduct another round of testing. What’s more, the chances of doing it correctly, repeatedly, and without any human error are slim. Those factors have driven the development of automation throughout all phases of the testing process, ranging from infrastructure builds to actual testing of code and applications. As for who should write which tests, as a general rule of thumb, it’s a task best suited to software engineers. They should create unit and integration tests as well as UI E2E tests. QA analysts should also be tasked with writing UI E2E test scenarios together with individual product owners. QA teams collaborating with business owners enhance product quality by aligning testing scenarios with real-world user experiences and business objectives. ... AWS CodePipeline can provide completely managed continuous delivery that creates pipelines, orchestrates and updates infrastructure and apps. It also works well with other crucial AWS DevOps services, while integrating with third-party action providers like Jenkins and GitHub.


Bridging the Digital Divide: Understanding APIs

While both Event-Driven Architecture (EDA) and Data-Driven Architecture (DDA) are crucial for modern enterprises, they serve distinct purposes, operate on different core principles, and manifest through different architectural characteristics. Understanding these differences is key for enterprise architects to effectively leverage their individual strengths and potential synergies. While EDA is often highly operational and tactical, facilitating immediate responses to specific triggers, DDA can span both operational and strategic domains. A key differentiator between the two lies in the “granularity of trigger.” EDA is typically triggered by fine-grained, individual events—a single mouse click, a specific sensor reading, a new message arrival. Each event is a distinct signal that can initiate a process. DDA, on the other hand, often initiates its processes or derives its triggers from aggregated data, identified patterns, or the outcomes of analytical models that have processed numerous data points. For example, an analytical process in DDA might be triggered by the availability of a complete daily sales dataset, or an alert might be generated when a predictive model identifies an anomaly based on a complex evaluation of multiple data streams over time. This distinction in trigger granularity directly influences the design of processing logic, the selection of underlying technologies, and the expected immediacy and nature of the system’s response.
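
The trigger-granularity distinction maps directly onto code: an event-driven handler fires per event, while a data-driven process fires on an aggregate condition over many points. A schematic sketch (thresholds and names are invented for illustration):

```python
# Schematic contrast of trigger granularity; names/thresholds are invented.

# EDA: fine-grained -- every individual event triggers processing.
def on_sensor_reading(reading: float) -> None:
    if reading > 90.0:
        print(f"event trigger: reading {reading} exceeded limit")

# DDA: coarse-grained -- a process runs once an aggregate/dataset is ready.
def on_daily_sales_complete(sales: list[float]) -> None:
    mean = sum(sales) / len(sales)
    if mean < 120.0:  # outcome of an analytical check over many points
        print(f"data trigger: daily mean {mean:.1f} below forecast")

on_sensor_reading(93.5)                         # fires per event
on_daily_sales_complete([100.0, 110.0, 125.0])  # fires per dataset
```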


What good threat intelligence looks like in practice

The biggest shortcoming is often in the last mile, connecting intelligence to real-time detection, response, and risk mitigation. Another challenge is organizational silos. In many environments, the CTI team operates separately from SecOps, incident response, or threat hunting teams. Without seamless collaboration between those functions, threat intelligence remains a standalone capability rather than a force multiplier. This is often where threat intelligence teams struggle to demonstrate their value to security operations. ... Rather than picking one over the other, CISOs should focus on blending these sources and correlating them with internal telemetry. The goal is to reduce noise, enhance relevance, and produce enriched insights that reflect the organization’s actual threat surface. Feed selection should also consider integration capabilities — intelligence is only as useful as the systems and people that can act on it. When threat intelligence is operationalized, a clear picture can be formed from the variety of available threat feeds. ... The threat intel team should be seen not as another security function, but as a strategic partner in risk reduction and decision support. CISOs can encourage cross-functional alignment by embedding CTI into security operations workflows, incident response playbooks, risk registers, and reporting frameworks.


4 ways to safeguard CISO communications from legal liabilities

“Words matter incredibly in any legal proceeding,” Brown agreed. “The first thing that will happen will be discovery. And in discovery, they will collect all emails, all Teams, all Slacks, all communication mechanisms, and then run queries against that information.” Speaking with professionalism is not only a good practice in building an effective cybersecurity program, but it can go a long way to warding off legal and regulatory repercussions, according to Scott Jones, senior counsel at Johnson & Johnson. “The seriousness and the impact of your words and all other aspects of how you conduct yourself as a security professional can have impacts not only on substantive cybersecurity, but also what harms might befall your company either through an enforcement action or litigation,” he said. ... CISOs also need to pay attention to what they say based on the medium in which they are communicating. Pay attention to “how we communicate, who we’re communicating with, what platforms we’re communicating on, and whether it’s oral or written,” Angela Mauceri, corporate director and assistant general counsel for cyber and privacy at Northrop Grumman, said at RSA. “There’s a lasting effect to written communications.” She added, “To that point, you need to understand the data governance and, more importantly, the data retention policy of those electronic communication platforms, whether it exists for 60 days, 90 days, or six months.”

Daily Tech Digest - March 01, 2025


Quote for the day:

"Your life does not get better by chance, it gets better by change." -- Jim Rohn


Two AI developer strategies: Hire engineers or let AI do the work

Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.” In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.” Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet generating sufficient productivity gains to generate cost savings or headcount reductions. He said, “current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”


The Great AI shift: The rise of ‘services as software’

Today, AI is pushing the envelope by turning services built to be used by humans as ‘self-serve’ utilities into automatically-running software solutions that execute autonomously—a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’ ... The shift is already conspicuous across industries. AI tools like Harvey AI are transforming the legal and compliance sector by analysing case law and generating legal briefs, essentially replacing human research assistants. The customer support ecosystem that once required large human teams in call centres now handles significant query volumes daily with AI chatbots and virtual agents. ... The AI-driven shift brings into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling. ... Businesses won't simply replace SaaS with AI-powered tools; they will build the company's processes and systems around these new systems. Instead of hiring marketing agencies, companies will use AI to generate dynamic marketing and advertising campaigns. Businesses will rely on AI-driven quality assurance and control instead of outsourcing software testing, quality assurance, and quality control.


Resilience, Observability and Unintended Consequences of Automation

Instead of thinking about replacing the work that humans do, it's about augmenting that work: how do we make it easier for us to do these kinds of jobs? That might be writing code, deploying it, or tackling incidents when they come up. The fancy, nerdy academic jargon for this is "joint cognitive systems." Instead of replacement, or "functional allocation" (another good nerdy academic term: we'll give the machines this piece, we'll give the humans those pieces), how do we have a joint system where the automation is really supporting the work of the humans in this complex system? And in particular, how do you allow them to troubleshoot it, introspect it, and actually understand it? The very nerdy versions of this research lay out possible ways of thinking about what these computers can do to help us. ... We could go monolith to microservices, we could go pick your digital transformation. How long did that take you? And how much care did you put into that? Maybe some of it was too long or too bureaucratic or what have you, but I would argue that we tend to YOLO internal developer technology way faster and way looser than we do with the things that actually make us money, or that are perceived to.


The Modern CDN Means Complex Decisions for Developers

“Developers should not have to be experts on how to scale an application; that should just be automatic. But equally, they should not have to be experts on where to serve an application to stay compliant with all these different patchworks of requirements; that should be more or less automatic,” Engates argues. “You should be able to flip a few switches and say ‘I need to be XYZ compliant in these countries,’ and the policy should then flow across that network and orchestrate where traffic is encrypted and where it’s served and where it’s delivered and what constraints are around it.” ... Along with the physical constraint of the speed of light and the rise of data protection and compliance regimes, Alexander also highlights the challenge of costs as something developers want modern CDNs to help them with. “Egress fees between clouds are one of the artificial barriers put in place,” he claims. That can be 10%, 20% or even 30% of overall cloud spend. “People can’t build the application that they want, they can’t optimize, because of some of these taxes that are added on moving data around.” Update patterns aren’t always straightforward either. Take a wiki like Fandom, where Fastly founder Artur Bergman was previously CTO.


A Comprehensive Look at OSINT

Cybersecurity professionals within corporations rely on public data to identify emerging phishing campaigns, data breaches, or malicious activity targeting their brand. Investigative journalists and academic researchers turn to OSINT for fact-checking, identifying new leads, and gathering reliable support for their reporting or studies. ... Avoiding OSINT or downplaying its value can leave organizations unaware of threats and opportunities that are readily discoverable to others. By failing to gather open-source data, businesses and government agencies could remain in the dark about malicious activities, negative brand impersonations, or stolen credentials circulating on forums and dark web marketplaces. In the event of a security breach or public scandal, stakeholders may view the lack of proper OSINT measures as a failure of due diligence, eroding trust and tarnishing the organization’s image. ... The primary driver behind OSINT’s growth is the vast reservoir of information generated daily by digital platforms, databases, and news outlets. This public data can be invaluable for enhancing security, improving transparency, and making more informed decisions. Security professionals, for instance, can preemptively identify threats and vulnerabilities posted openly by malicious actors. 


OT/ICS cyber threats escalate as geopolitical conflicts intensify

A persistent lack of visibility into OT environments continues to obscure the full scale of these attacks. These insights come from Dragos’ 2025 OT/ICS Cybersecurity Report, its eighth annual Year in Review, which analyzes the cyber threats facing industrial organizations. ... VOLTZITE is arguably the most crucial threat group to track in critical infrastructure. Due to its dedicated focus on OT data, the group is a capable threat to ICS asset owners and operators. This group shares extensive technical overlaps with the Volt Typhoon threat group tracked by other organizations. It utilizes the same techniques as in previous years, setting up complex chains of network infrastructure to target and compromise victims and steal OT-relevant data—GIS data, OT network diagrams, OT operating instructions, etc.—from ICS organizations. ... Increasing collaboration between hacktivist groups and state-backed cyber actors has led to a hybrid threat model where hacktivists amplify state objectives, either directly or through shared infrastructure and intelligence. State actors increasingly look to exploit hacktivist groups as proxies to conduct deniable cyber operations, allowing for more aggressive attacks with reduced attribution risks.


Leveraging AR & VR for Remote Maintenance in Industrial IoT

AR tools like Microsoft’s HoloLens 2 are enabling workers on-site to receive real-time guidance from experts located anywhere in the world. Using AR glasses or headsets, on-site personnel can share their view with remote technicians, who can then overlay instructions, schematics, or step-by-step troubleshooting guidance directly onto the worker’s field of vision. This allows maintenance teams to resolve issues faster and more accurately, without the need for travel, reducing downtime and operational costs. ... By using VR simulations, workers can familiarize themselves with equipment, troubleshoot issues, and practice responses to emergencies, all in a virtual setting. This hands-on experience builds confidence and competence, ultimately improving safety and efficiency when dealing with real equipment. As IIoT systems become more sophisticated, VR training can play a key role in ensuring that the workforce is well-prepared to handle advanced technologies without risking costly mistakes or accidents. ... In the future, we can expect even more seamless integration between AR/VR systems and IIoT platforms, where real-time data from sensors and machines is directly fed into the AR/VR environment, providing a comprehensive view of machine health, performance and issues. 


Just as DNA defines an organism’s identity, business continuity must be deeply embedded in every aspect of your organization. It is more than just a collection of emergency plans or procedures; it embodies a philosophy that ensures not only survival during disruptions, but long-term sustainability as well. ... An organization without continuity is like a tree without roots—fragile and vulnerable to the slightest shock. Continuity serves as an anchor, allowing organizations to navigate crises while staying aligned with their strategic goals. Any organization that aims to grow and thrive must take a proactive approach to continuity. Continuity strategies and initiatives can be seen as the roots of a tree, natural extensions that provide stability and sustain growth. ... It is essential that both leaders and team members possess the experience and skills needed to execute their work effectively. ... Thoroughly assess your key vulnerabilities. This involves two primary methods: a business impact analysis (BIA), which analyzes the impacts of a disturbance over time to determine recovery priorities, resource requirements, and appropriate responses; and risk analysis, which identifies risks tied to prioritized activities and critical resources. Together, these two approaches offer a comprehensive understanding of your organization’s pain points.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

This phenomenon, a “compound physical-cyber threat,” where a cyberattack is intentionally launched around a heatwave or hurricane, for example, would have outsized and potentially devastating effects on businesses, communities, and entire economies, according to a 2024 study led by researchers at Johns Hopkins University. “Cyber-attacks are more disruptive when infrastructure components face stresses beyond normal operating conditions,” the study asserted. Businesses and their IT and risk management people would be wise to take notice, because both cyberattacks and weather-related disasters are increasing in frequency and in the cost they exact from their victims. ... Take what you learn from the risk assessment to develop a detailed plan that outlines the steps your organization intends to take to preserve cybersecurity, business continuity, and network connectivity during a crisis. Whether you’re a B2B or B2C organization, your customers, employees, suppliers and other stakeholders expect your business to be “always on,” 24/7/365. How will you keep the lights on, the lines of communications open, and your network insulated from cyberattack during a disaster? 


‘It Won’t Happen to Us:’ The Dangerous Mindset Minimizing Crisis Preparation

The main mistakes in crisis situations include companies staying silent and not releasing official statements from management, creating a vacuum of information and promoting the spread of rumors. ... First and foremost, companies should not underestimate the importance of communication, especially when things are not going well. During a crisis, many companies prefer to sit quietly and wait without informing or sharing anything about their measures and actions in connection with the crisis. This is the wrong approach. Silence gives competitors enough space to thrive and gain a market advantage. Meanwhile, journalists won’t stop working on hot stories. When you don’t share anything meaningful with them or your audience, they may collect and publish rumors and misinformation about your company. And the lack of comments creates the ground for negative interpretations. Therefore, transparency and efficiency are key principles of anti-crisis communication. If you are clear in your messages and give quick responses, it allows the company to control the information agenda. The surefire way to gain and maintain trust is to promptly and regularly inform your company’s investors during a crisis through your own channels. 

Daily Tech Digest - November 01, 2024

How CISOs can turn around low-performing cyber pros

When facing difficulties in both their professional and personal lives, people can start to withdraw and be less interested in contributing, even doing the bare minimum. They might also make mistakes more often or miss deadlines, or they can care less about how their colleagues or managers perceive their work. Body language can also provide insight into an employee’s emotional state and engagement level. When assigning tasks, Michelle Duval, founder and CEO at Marlee, a collaboration and performance AI for the workplace, looks her colleagues in the eyes. “Avoiding eye contact or visible sighing… are helpful clues,” she says. ... When it comes to helping employees improve their performance, the key point is to understand why they have problems in the first place and act quickly. “The best coaching depends on what type of problem you’re fixing,” says Caroline Ceniza-Levine, executive recruiter and career coach. “If the employee’s work product is suffering, they may need more direction or skills training. If the employee is disengaged, they may need help getting motivated – in this case, giving them more information around why their work matters and how important their contribution is may help.”


AI in Finserv: Predictive Analytics to Inclusive Banking

AI’s ability to synthesise vast amounts of data allows organisations to connect data from previously disparate sources, and then analyse it to detect historical patterns and deliver forward-looking insights. In the banking industry, this is happening at both a high level through traditional data analysis, and, increasingly, through more advanced AI tools including Natural Language Processing (NLP) and Machine Learning (ML). As organisations continue gathering these predictive analytics, many are also in the process of providing feedback to their AI systems which will ultimately improve their predictive accuracy over time. The main use case in which banks are currently seeing the biggest impact from AI-powered predictive insights is in forecasting consumer behaviour. ... AI-powered fraud detection algorithms can analyse vast amounts of transaction data in real-time at a scale that’s unattainable by humans. The real-time nature of these systems also allows organisations to prevent loss by intercepting anomalous transactions before they’re settled. This scalable, automatic approach also makes it easier for financial organisations to stay in compliance with relevant anti-money laundering (AML) and anti-terrorist financing regulations and avoid steep penalties.
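
As a hedged illustration of scoring transactions before settlement (the features, model choice, and threshold here are invented; production systems are far richer), an unsupervised anomaly detector can flag outliers in real time:

```python
# Illustrative transaction scoring sketch; features/thresholds are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Historical transactions: [amount, hour_of_day]
normal = np.column_stack([rng.lognormal(3.5, 0.5, 5000),
                          rng.normal(14, 4, 5000)])
model = IsolationForest(contamination=0.01, random_state=1).fit(normal)

def hold_for_review(amount: float, hour: float) -> bool:
    """Return True if the transaction should be intercepted pre-settlement."""
    return model.predict([[amount, hour]])[0] == -1  # -1 == anomalous

print(hold_for_review(45.0, 13.0))    # typical purchase -> likely False
print(hold_for_review(9500.0, 3.0))   # large amount at 3am -> likely True
```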


Critical Software Must Drop C/C++ by 2026 or Face Risk

The federal government is heightening its warnings about dangerous software development practices, with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) issuing stark warnings about basic security failures that continue to plague critical infrastructure. ... The report also states that the memory safety roadmap should outline the manufacturer’s prioritized approach to eliminating memory safety vulnerabilities in priority code components. “Manufacturers should demonstrate that the memory safety roadmap will lead to a significant, prioritized reduction of memory safety vulnerabilities in the manufacturer’s products and demonstrate they are making a reasonable effort to follow the memory safety roadmap,” the report said. “There are two good reasons why businesses continue to maintain COBOL and Fortran code at scale. Cost and risk,” Shimmin told The New Stack. “It’s simply not financially possible to port millions of lines of code, nor is it a risk any responsible organization would take.” ... Finally, it is good that CISA is recommending that companies with critical software in their care should create a stated plan of attack by early 2026, Shimmin said.


Into the Wild: Using Public Data for Cyber Risk Hunting

Threat hunting, by contrast, is a proactive approach. It means that cyber teams go out into the wild and proactively identify potential risks and threat patterns, isolating them before they can cause any harm. A threat-hunting team requires specific knowledge and skills. Therefore, it usually consists of various professionals, such as threat analysts, who analyze available data to understand and predict the attacker's behavior; incident responders, who are ready to reduce the impact of a security incident; and cybersecurity engineers, responsible for building a secure network solution capable of protecting the network from advanced threats. These teams are trained to understand their company's IT environment, gather and analyze relevant data, and identify potential threats. Moreover, they have a clear risk escalation and communication process, which helps effectively react to threats and mitigate risks. Specialists often use a combination of tools that help in threat hunting. ... Endpoint detection and response (EDR) systems combine continuous real-time monitoring and collection of end-point data with a rule-based automated response.


How to Keep IT Up and Running During a Disaster

Using IoT sensing technology can provide early warning of disaster events and keep an eye on equipment if human access to facilities is cut off. Sensors and cameras can be helpful in determining when it may be appropriate to switch operations to other facilities or back up servers. Moisture sensors, for example, can detect whether floods may be on the verge of impacting device performance. ... In disaster-prone regions, it is advisable to proactively facilitate relationships with government authorities and emergency response agencies. This can be helpful both in ensuring continued compliance and assistance in the event of a natural disaster. “There are certain aspects of [disaster response] that need to be captured,” Miller says. “A lot of times in crisis mode, that becomes a secondary focus. But [disaster management] systems allow the tracking and the recording of that information.” Being aware of deadlines for compliance reporting and being in contact with regulators if they might be missed can save money on potential fines and penalties. And notifying emergency response agencies may result in prioritization of assistance given the economic imperatives of IT continuity.
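
A minimal sketch of that sensor-driven failover decision, with the sensor read and failover call as hypothetical placeholders:

```python
# Hypothetical sketch: trigger failover when moisture readings suggest flooding.
import time

MOISTURE_LIMIT = 0.80   # invented threshold (fraction of saturation)

def read_moisture_sensor() -> float:
    """Placeholder for a real IoT sensor read (e.g., via MQTT or an API)."""
    return 0.85

def switch_to_backup_site() -> None:
    """Placeholder for redirecting workloads to a backup facility."""
    print("failing over to backup data center")

def monitor(poll_seconds: int = 60, consecutive: int = 3) -> None:
    over = 0
    while True:
        over = over + 1 if read_moisture_sensor() > MOISTURE_LIMIT else 0
        if over >= consecutive:        # debounce: require sustained readings
            switch_to_backup_site()
            return
        time.sleep(poll_seconds)

monitor(poll_seconds=1)
```

Requiring several consecutive over-threshold readings avoids failing over on a single noisy sample, which matters when switching sites is expensive.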


Breaking Down Data Silos With Real-Time Streaming

Traditional "extract, transform, load" and "extract, load, transform" data pipelines have historically been the primary method for moving data into analytics. But analytics consumers have often had limited control or influence over the source data model, which is typically defined by application developers in the operational domain. Data is also often stale and outdated by the time it arrives for processing. "By shifting data processing and governance, organizations can eliminate redundant pipelines, reduce the risk and impact of bad data at its source, and leverage high-quality, continuously up-to-date data assets for both operational and analytical purposes," LaForest said. Real-time data streaming is especially crucial in sectors such as finance, e-commerce and logistics, where even a few seconds of delay can negatively impact customer satisfaction and profitability. ... Real-time data streaming is emerging as the foundation for the next wave of AI innovation. For predictive AI and pattern recognition, data needs to be available in real time to drive accurate, immediate insights. Real-time data pipelines are essential for enabling AI systems to deliver smarter, faster insights and drive more accurate decision-making across the enterprise.


Is now the right time to invest in implementing agentic AI?

What makes agentic AI autonomous or able to take actions independently is its ability to interpret data, predict outcomes, and make decisions, learning from new data — unlike traditional RPA, which falters when encountering unexpected data, said Cameron Marsh, senior analyst at Nucleus Research. This adaptive nature of agentic AI, according to Chada, can help enterprises increase efficiency by handling complex, variable tasks that traditional RPA can’t manage, such as the roles of a claims adjuster, a loan officer, or a case worker, provided that it has access to the necessary data, workflows, and tools required to complete the task. ... Some platform vendors are already offering low-code and no-code agent development and management platforms, but these are limited in their functionality to building simple agents or modifying templates for agents built by the vendors themselves, analysts said. “Creating more complex agents, specifically ones that require customized integrations and nuanced decision-making abilities still demands some technical understanding of data flows, machine learning model tuning, and API integrations,” Futurum’s Hinchcliffe said, adding that there is a learning curve on these platforms and that the migration journey could be resource intensive.


How open-source MDM solutions simplify cross-platform device management

Few MDM solutions effectively address the challenge of device diversity, as most are designed to manage specific hardware or software platforms. This limitation forces businesses to juggle multiple solutions to cover their entire device ecosystem. Open-source MDM solutions, however, offer flexible, modular architectures that adapt to various operating systems and device types. Open standards and extensible APIs ensure cross-platform compatibility, from mobile devices to servers to IoT endpoints. Unified management interfaces abstract platform complexities, providing consistent administration across diverse devices, while collaboration with open-source communities broadens device support. These approaches simplify management for IT teams in heterogeneous environments, reducing the need for multiple specialized solutions. ... An effective MDM solution enhances device management in remote locations by enabling developers and administrators to create lightweight agents for low-bandwidth environments and implement platform-agnostic policies for diverse ecosystems. With custom scripts and modular components, businesses can tailor management workflows to align with specific operational demands, ensuring seamless integration across various environments. 


4 Essential Strategies for Enhancing Your Application Security Posture

Whatever the cause, the torrent of false positives wastes time, lowers security team morale, and obscures real threats. As a result, risks of a major oversight increase, and response time to actual threats slows, leading to undetected breaches, data loss, financial damage, and erosion of customer trust. ... To successfully implement shifting left, AppSec must deliver solutions that eliminate the burden of manual security tasks. The ASPM strategy is to integrate tools directly into the development environment to make security checks a seamless part of the development workflow. Such integrations would provide real-time feedback and actionable security guidance, minimizing disruptions and significantly enhancing productivity. ... One of the biggest challenges in AppSec today is tool sprawl. The wide array of tools promising to plug different security gaps burdens security teams with a complex security ecosystem that locks critical data into tool-specific silos. This data fragmentation makes it impossible for security teams to gain a holistic view of the security environment, leading to confusion and missed vulnerabilities when insights from one tool don’t correlate with insights from another.


How a classical computer beat a quantum computer at its own game

Confinement is a phenomenon that can arise under special circumstances in closed quantum systems and is analogous to the quark confinement known in particle physics. To understand confinement, let's begin with some quantum basics. On quantum scales, an individual magnet can be oriented up or down, or it can be in a "superposition"—a quantum state in which it points both up and down simultaneously. How up or down the magnet is affects how much energy it has when it's in a magnetic field. ... Serendipitously, IBM had, in their initial test, set up a problem where the organization of the magnets in a closed two-dimensional array led to confinement. Tindall and Sels realized that since the confinement of the system reduced the amount of entanglement, it kept the problem simple enough to be described by classical methods. Using simulations and mathematical calculations, Tindall and Sels came up with a simple, accurate mathematical model that describes this behavior. "One of the big open questions in quantum physics is understanding when entanglement grows rapidly and when it doesn't," Tindall says. 
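
In standard textbook notation (not from the article itself), the superposition and the orientation-dependent energy read:

```latex
% A single quantum magnet (spin) in a superposition of up and down:
\lvert \psi \rangle = \alpha \lvert \uparrow \rangle + \beta \lvert \downarrow \rangle,
\qquad \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1
% Its energy in a magnetic field B depends on orientation (the Zeeman term):
E = -\boldsymbol{\mu} \cdot \mathbf{B}
```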



Quote for the day:

"The meaning of life is to find your gift. The purpose of life is to give it away." -- Anonymous

Daily Tech Digest - November 04, 2023

AI’s Role In Payments 3.0: Balancing Innovation With Responsibility

At the core of the responsible use of AI is protecting individuals and their data. After all, one of the most personal things to people is their financial information. It’s critical for businesses to work with data experts who know how to keep customers’ private information safe while appropriately using payment data to enable personalized service. ... One of AI’s greatest advantages is its ability to scan and analyze large amounts of data and suggest or implement improved experiences related to payments. One could argue that AI might handle these tasks better than humans, not only because it can do so quickly, but because it eliminates the biases humans can impose. However, is that true in practice? Unfortunately, not always. Responsible AI depends upon first choosing the most accurate, audited and unbiased data sources available. Then it must use a system of audits during the development and implementation of the machine learning model—and frequently thereafter—to detect and correct inappropriately biased decision-making. Another compelling reason to weed out AI bias is compliance.
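
To make the idea of a recurring bias audit concrete, here is a minimal sketch that computes one widely used fairness check, the disparate impact ratio, over logged model decisions. The data, group labels, and the 0.8/1.25 thresholds (the common "four-fifths rule") are illustrative assumptions, not anything the article prescribes.

    # Minimal sketch of a recurring bias audit: compare approval rates across
    # groups using the disparate impact ratio (four-fifths rule as a threshold).
    def disparate_impact(decisions: list[tuple[str, bool]], group_a: str, group_b: str) -> float:
        def approval_rate(group: str) -> float:
            outcomes = [approved for g, approved in decisions if g == group]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
        return rate_a / rate_b if rate_b else float("inf")

    # Hypothetical audit log of (group, approved) decisions from a payments model.
    audit_log = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    ratio = disparate_impact(audit_log, "A", "B")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8 or ratio > 1.25:  # four-fifths rule, applied in both directions
        print("Potential bias detected -- trigger a model review.")

Running a check like this on a schedule, rather than only at launch, is one way to operationalize the "frequently thereafter" auditing the article calls for.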


Decoding Kafka: Why It’s Worth the Complexity

First off, learning Kafka requires time and dedication. Newcomers might take a few days or weeks to grasp the basics and months to master advanced features and concepts. In addition, you need to constantly monitor and learn from the cluster’s performance as well as keep up with Kafka’s evolution and new features being released. Setting up your Kafka deployment can be challenging, expensive and time-consuming. This process can take anywhere between a few days and a few weeks, depending on the scale and the specifics of the setup. You may even decide that a dedicated platform team will need to be created specifically to manage Kafka. ... Kafka is more than a simple message broker. It offers additional capabilities like stream processing, durability, flexible messaging semantics and better scalability and performance than traditional brokers. While its superior characteristics increase complexity, the trade-off seems justified. Otherwise, why would numerous companies worldwide use Kafka? 
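
For a taste of what working with Kafka looks like once a cluster is running, here is a minimal produce-and-consume round trip using the kafka-python client; the broker address and topic name are placeholders.

    # Minimal Kafka produce/consume round trip with the kafka-python client.
    # Assumes a broker is reachable at localhost:9092 and the topic exists.
    from kafka import KafkaProducer, KafkaConsumer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")
    producer.send("events", key=b"user-42", value=b'{"action": "login"}')
    producer.flush()  # block until the message is acknowledged

    consumer = KafkaConsumer(
        "events",
        bootstrap_servers="localhost:9092",
        auto_offset_reset="earliest",  # replay from the start of the topic
        consumer_timeout_ms=5000,      # stop iterating if no message arrives
    )
    for message in consumer:
        print(message.key, message.value)

Note that the ability to replay messages from the earliest offset, shown above, is one of the durability features that distinguishes Kafka from a simple fire-and-forget message broker.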


The Tech Gold Rush Is Over. The Search for the Next Gold Rush Is On

The tech industry will probably shrink, too. It will still be an important part of the economy, but employing fewer people and offering more normal returns. Now the question is where young people seeking wealth will turn next. From the Age of Discovery to the actual California Gold Rush to Snapchat, it is human nature to chase fortune where you can. It is a large part of what moves an economy forward. But these modern settings for this quest — the finance industry and the tech industry — are unusual in that they attracted many people who expected to get rich even if they lacked two things normally associated with extreme wealth creation: creating lots of value for the economy, and taking smart risks. The fact that anyone thinks things should be different is simply the result of the historically low interest rates of the last few decades, which helped make capital incredibly cheap. The price of capital is the price of risk, and if it is effectively zero, then it stands to reason that easy fortunes can be made risk-free.


To Improve Cyber Defenses, Practice for Disaster

The primary challenge organizations face when executing crisis simulations is determining the right level of difficulty, says Tanner Howell, director of solutions engineering at RangeForce. "With threat actors ranging from script kiddies to nation-states, it's vital to strike a balance of difficulty and relevance," he says. "If the simulation is too simple, it won't effectively test the playbooks. Too difficult, and team engagement may decrease." Walters says organizations should expand simulations beyond technical aspects to include regulatory compliance, public relations strategies, customer communications, and other critical areas. "These measures will help ensure that crisis simulations are comprehensive and better prepare the organization for a wide range of cybersecurity scenarios," he notes. Taavi Must, CEO of RangeForce, says organizations can implement some key best practices to improve team collaboration, readiness, and defensive posture. "Managers can perform business analysis to identify the most applicable threats to the organization," he says. "This allows teams to focus their already precious time around what matters most to them."


Going All-in With Evergreen Cloud Adoption Brings Its Own Challenges

An evergreen IT strategy, on the other hand, keeps a cloud-based system online throughout, allowing businesses to conduct smaller, more regular updates weekly, biweekly, or monthly. These can be planned at optimum times to avoid downtime, support business continuity, and mitigate lost profits. Disruption during transformation is always a risk, so incremental changes also mean pockets of disruption are easier to isolate and resolve. An MSP expert has the time and knowledge necessary to craft a bespoke transformation strategy that builds in operational resilience and offsets potential downtime. For instance, round-the-clock help desks mean businesses can access support and advice whenever they need it, enabling the fastest possible resolution of problems during the onboarding process. ... MSPs don’t just offer support during implementation; they have a wealth of systems knowledge that businesses can exploit to assist with automation updates, faults, and internal process reviews. For industries such as manufacturing, downtime can account for 5–20% of working time, with lost productivity costing up to £180 billion a year.


Ongoing supply chain disruption continues to take its toll

Firstly, there has been little sign of easing on some supply chains, with 86 percent of respondents stating that they had experienced supply chain volatility over the past year, slightly down on the share reporting the same in winter 2022 and similar to the level reported one year ago. Once again, the professionals responsible for delivering new data center facilities to the market have been significantly impacted by supply chain volatility. Our survey of developer/investor respondents revealed that some 91 percent confirmed being significantly affected, an increase from the 82 percent reported six months ago and the 83 percent recorded a year ago. Notably, the impact was more pronounced among our DEC stakeholders, with 93 percent expressing strong agreement, compared to 70 percent in Q4 2022. Amongst our service providers, there is still a high level of agreement regarding this disruption, albeit with some marginal easing: 92 percent stated that they had experienced such supply chain problems, down from the 97 percent reported six months ago.


What Does a Data Scientist Do?

As the field of Data Science continues to evolve, so too does its potential to transform industries and revolutionize decision-making processes. While descriptive and predictive analytics have long been utilized to gain insights from historical data and make informed predictions about future outcomes, the current emphasis is on prescriptive analytics. Prescriptive analytics takes data analysis to a whole new level by not only providing insights into what might happen but also offering recommendations on how to make it happen. By leveraging advanced algorithms, machine learning techniques, and artificial intelligence, data scientists can now go beyond simply understanding patterns and trends. They can provide actionable suggestions that optimize decision-making processes. The impact of prescriptive analytics is far-reaching across numerous sectors. For example, in healthcare, it can help physicians determine personalized treatment plans based on patient data. In supply chain management, it can optimize inventory levels and streamline logistics operations. In finance, it can assist with risk management strategies.
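
To ground the inventory example, here is a minimal prescriptive-analytics sketch that recommends order quantities via linear programming with SciPy; the costs, demand floors, and warehouse capacity are invented purely for illustration.

    # Minimal prescriptive-analytics sketch: choose order quantities for two
    # products to minimize cost subject to demand and warehouse constraints.
    from scipy.optimize import linprog

    # Unit costs to minimize (product 1, product 2).
    costs = [2.0, 3.0]
    # Warehouse capacity: x1 + x2 <= 1000 units.
    A_ub = [[1, 1]]
    b_ub = [1000]
    # Meet forecast demand: at least 200 of product 1, 300 of product 2.
    bounds = [(200, None), (300, None)]

    result = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if result.success:
        print("Recommended order quantities:", result.x)
        print("Total cost:", result.fun)

This is the step beyond prediction the article describes: the model does not just forecast demand, it recommends the action (here, order quantities) that best satisfies the business's constraints.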


From automated to autonomous, will the real robots please stand up?

Today's robots, despite not being as versatile as Mr. Data, are generally quite useful and functional. These include industrial robots, medical robots, military and defense robots, domestic robots, entertainment robots, space exploration and maintenance robots, agricultural robots, retail robots, underwater robots, and telepresence robots that help people participate in an activity from a distance. My personal interest has been focused on robots available and accessible to makers and hobbyists, robots that can empower individuals to build, design, and prototype projects previously only feasible by those with a shop full of fabrication machinery. I'm talking about 3D printers, which build up objects from layers of molten plastic; CNC devices, which often cut, carve, and remove wood or metal to create objects; laser cutters, which are ideal for sign cutting, engraving, and fabricating very detailed parts and circuit boards; and even vinyl cutters, for carefully cutting light, flexible material in intricate patterns. These machines are programmed using CAD software to define -- aka, design -- the object being built.


The Misleading Use of the ‘Technical Debt’ expression

Good software is software that makes users happy and that can be improved when necessary. However, some technical debt emerges precisely from this need to evolve the product. When software does a good job of solving a problem, it will probably need to scale: as more people use it, more features become necessary, and so on. When that happens, some characteristics of your software will need to change, and something that used to work just fine becomes a debt. If you don’t address it soon enough, your great software will be called a legacy one. Software and its architecture should be able to evolve. It’s important to map technical opportunities because they can become technical debt in the future, but we don’t need to act on all of them immediately if they don’t fit the current scenario. I’ve seen many managers say, ‘From now on, let’s never build a feature with debt.’ To me, that reflects a deep misunderstanding of the reality of software development. You will need to take on debt to hit delivery deadlines and succeed with your software and your business, and debt will also always emerge as your product’s circumstances change.


What is a digital twin? And why is everyone suddenly building one?

Digital twins of vehicles, factories, and cities already exist, but could there ever be a digital twin of you? It’s very possible, especially as health technology, biosensors, and AI’s predictive ability improves. It’s easy to envision a day when every person has a digital health twin that mirrors our physical and genetic makeup and lifestyle, a doppelgänger we can feed hypothetical inputs to and see what effects they may have on our bodies. For example, a digital health twin could show us how adhering to a vegetarian diet would impact our specific body over the next 20 years. Or our digital health twin might be able to reveal the effect that physical stresses on our body will have as we continue to age—what another 10 years of sitting at our desk at work will do to our spine, for example. A digital twin may even be able to show us which treatment option for a disease is better for us by revealing how our unique body would react if we chose to undertake one treatment plan instead of another, say medication versus surgery. All this sounds far-fetched, but the digital twinning of organs and entire bodies is already well underway in the healthcare industry.



Quote for the day:

“Failure defeats losers, failure inspires winners.” -- Robert T. Kiyosaki