
Daily Tech Digest - September 02, 2025


Quote for the day:

“The art of leadership is saying no, not yes. It is very easy to say yes.” -- Tony Blair


When Browsers Become the Attack Surface: Rethinking Security for Scattered Spider

Scattered Spider, also referred to as UNC3944, Octo Tempest, or Muddled Libra, has matured over the past two years through precision targeting of human identity and browser environments. This shift differentiates them from other notorious cybergangs like Lazarus Group, Fancy Bear, and REvil. If sensitive information such as calendars, credentials, or security tokens lives in browser tabs, Scattered Spider can acquire it. ... Once user credentials get into the wrong hands, attackers like Scattered Spider move quickly to hijack previously authenticated sessions by stealing cookies and tokens. Securing the integrity of browser sessions is best achieved by restricting unauthorized scripts from accessing or exfiltrating these sensitive artifacts. Organizations must enforce contextual security policies based on components such as device posture, identity verification, and network trust. By linking session tokens to context, enterprises can prevent attacks like account takeovers even after credentials have been compromised. ... Although browser security is the last mile of defense for malware-less attacks, integrating it into an existing security stack fortifies the entire network. By feeding activity logs enriched with browser data into SIEM, SOAR, and ITDR platforms, CISOs can correlate browser events with endpoint activity for a much fuller picture.
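The session-binding idea in this excerpt can be sketched in a few lines; the field names (device_id, network_zone) and the in-memory session store below are illustrative assumptions, not any particular product's API.

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)                 # per-deployment signing secret
SESSIONS: dict[str, tuple[str, str]] = {}            # token -> (context fingerprint, user id)

def context_fingerprint(user_id: str, device_id: str, network_zone: str) -> str:
    """Bind a session to identity, device posture, and network trust zone."""
    material = f"{user_id}|{device_id}|{network_zone}".encode()
    return hmac.new(SERVER_KEY, material, hashlib.sha256).hexdigest()

def issue_session(user_id: str, device_id: str, network_zone: str) -> str:
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = (context_fingerprint(user_id, device_id, network_zone), user_id)
    return token

def validate_session(token: str, device_id: str, network_zone: str) -> str | None:
    record = SESSIONS.get(token)
    if record is None:
        return None
    fingerprint, user_id = record
    # A cookie replayed from a different device or network fails this check,
    # even though the stolen token itself is still "valid".
    candidate = context_fingerprint(user_id, device_id, network_zone)
    return user_id if hmac.compare_digest(fingerprint, candidate) else None

token = issue_session("alice", device_id="laptop-42", network_zone="corp-vpn")
print(validate_session(token, "laptop-42", "corp-vpn"))      # alice
print(validate_session(token, "unknown-device", "public"))   # None
```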


The Transformation Resilience Trifecta: Agentic AI, Synthetic Data and Executive AI Literacy

The current state of Agentic AI is, in a word, fragile. Ask anyone in the trenches. These agents can be brilliant one minute and baffling the next. Instructions get misunderstood. Tasks break in new contexts. Chaining agents into even moderately complex workflows exposes just how early we are in this game. Reliability? Still a work in progress. And yet, we’re seeing companies experiment. Some are stitching together agents using LangChain or CrewAI. Others are waiting for more robust offerings from Microsoft Copilot Studio, OpenAI’s GPT-4o Agents, or Anthropic’s Claude toolsets. It’s the classic innovator’s dilemma: Move too early, and you waste time on immature tech. Move too late, and you miss the wave. Leaders must thread that needle — testing the waters while tempering expectations. ... Here’s the scarier scenario I’m seeing more often: “Shadow AI.” Employees are already using ChatGPT, Claude, Copilot, Perplexity — all under the radar. They’re using it to write reports, generate code snippets, answer emails, or brainstorm marketing copy. They’re more AI-savvy than their leadership. But they don’t talk about it. Why? Fear. Risk. Politics. Meanwhile, some executives are content to play cheerleader, mouthing AI platitudes on LinkedIn but never rolling up their sleeves. That’s not leadership — that’s theater.


Red Hat strives for simplicity in an ever more complex IT world

One of the most innovative developments in RHEL 10 is bootc in image mode, where VMs run like a container and are part of the CI/CD pipeline. By using immutable images, all changes are controlled from the development environment. Van der Breggen illustrates this with a retail scenario: “I can have one POS system for the payment kiosk, but I can also have another POS system for my cashiers. They use the same base image. If I then upgrade that base image to later releases of RHEL, I create one new base image, tag it in the environments, and then all 500 systems can be updated at once.” Red Hat Enterprise Linux Lightspeed acts as a command-line assistant that brings AI directly into the terminal. ... For edge devices, Red Hat uses a solution called Greenboot, which does not immediately proceed to a rollback but waits until certain conditions are met. After, for example, three reboots without a working system, it reverts to the previous working release. However, not everything has been worked out perfectly yet. Lightspeed currently only works online, while many customers would like to use it offline because their RHEL systems are tucked away behind firewalls. Red Hat is still looking into possibilities for an expansion here, although making the knowledge base available offline poses risks to intellectual property.
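The retry-then-rollback behaviour described for Greenboot can be illustrated with a generic sketch of the logic (count failed health checks, revert after a threshold); this is not Greenboot's actual implementation, and the image names are made up.

```python
MAX_FAILED_BOOTS = 3   # the "three reboots" threshold from the example above

def on_boot(state: dict, health_check_passed: bool) -> str:
    """Decide whether to keep the newly deployed release or revert to the last good one."""
    if health_check_passed:
        state["failed_boots"] = 0
        state["last_good_release"] = state["current_release"]
        return "keep-current"
    state["failed_boots"] = state.get("failed_boots", 0) + 1
    if state["failed_boots"] >= MAX_FAILED_BOOTS:
        state["current_release"] = state["last_good_release"]   # roll back
        state["failed_boots"] = 0
        return "rolled-back"
    return "retry"

state = {"current_release": "base-image-v2", "last_good_release": "base-image-v1"}
for _ in range(3):                                   # three consecutive failed boots
    outcome = on_boot(state, health_check_passed=False)
print(outcome, state["current_release"])             # rolled-back base-image-v1
```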


The state of DevOps and AI: Not just hype

The vision of AI that takes you from a list of requirements through work items to build to test to, finally, deployment is still nothing more than a vision. In many cases, DevOps tool vendors use AI to build solutions to the problems their customers have. The result is a mixture of point solutions that can solve immediate developer problems. ... Machine learning is speeding up testing by failing faster. Build steps get reordered automatically so those that are likely to fail happen earlier, which means developers aren’t waiting for the full build to know when they need to fix something. Often, the same system is used to detect flaky tests by muting tests where failure adds no value. ... Machine learning gradually helps identify the characteristics of a working system and can raise an alert when things go wrong. Depending on the governance, it can spot where a defect was introduced and start a production rollback while also providing potential remediation code to fix the defect. ... There’s a lot of puffery around AI, and DevOps vendors are not helping. A lot of their marketing emphasizes fear: “Your competitors are using AI, and if you’re not, you’re going to lose” is their message. Yet DevOps vendors themselves are only one or two steps ahead of you in their AI adoption journey. Don’t adopt AI pell-mell due to FOMO, and don’t expect to replace everyone under the CTO with a large language model.
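The "fail faster" reordering described here boils down to sorting pipeline steps by historical failure rate; a minimal sketch, with made-up failure rates and ignoring the dependency constraints a real pipeline would also have to respect:

```python
# Hypothetical historical failure rates per pipeline step (fraction of recent runs that failed).
failure_rates = {
    "lint": 0.20,
    "unit-tests": 0.15,
    "integration-tests": 0.05,
    "package": 0.01,
}

# Run the steps most likely to fail first, so a broken change is reported sooner.
ordered_steps = sorted(failure_rates, key=failure_rates.get, reverse=True)
print(ordered_steps)   # ['lint', 'unit-tests', 'integration-tests', 'package']
```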


5 Ways To Secure Your Industrial IoT Network

IIoT is a subcategory of the Internet of Things (IoT). It is made up of a system of interconnected smart devices that use sensors, actuators, controllers and intelligent control systems to collect, transmit, receive and analyze data. ... IIoT also has its unique architecture that begins with the device layer, where equipment, sensors, actuators and controllers collect raw operational data. That information is passed through the network layer, which transmits it to the internet via secure gateways. Next, the edge or fog computing layer processes and filters the data locally before sending it to the cloud, helping reduce latency and improving responsiveness. Once in the service and application support layer, the data is stored, analyzed, and used to generate alerts and insights. ... Many IIoT devices are not built with strong cybersecurity protections. This is especially true for legacy machines that were never designed to connect to modern networks. Without safeguards such as encryption or secure authentication, these devices can become easy targets. ... Defending against IIoT threats requires a layered approach that combines technology, processes and people. Manufacturers should segment their networks to limit the spread of attacks, apply strong encryption and authentication for connected devices, and keep software and firmware regularly updated.


AI Chatbots Are Emotionally Deceptive by Design

Even without deep connection, emotional attachment can lead users to place too much trust in the content chatbots provide. Extensive interaction with a social entity that is designed to be both relentlessly agreeable and specifically personalized to a user’s tastes can also lead to social “deskilling,” as some users of AI chatbots have flagged. This dynamic is simply unrealistic in genuine human relationships. Some users may be more vulnerable than others to this kind of emotional manipulation, like neurodiverse people or teens who have limited experience building relationships. ... With AI chatbots, though, deceptive practices are not hidden in user interface elements, but in their human-like conversational responses. It’s time to consider a different design paradigm, one that centers user protection: non-anthropomorphic conversational AI. All AI chatbots can be less anthropomorphic than they are, at least by default, without necessarily compromising function and benefit. A companion AI, for example, can provide emotional support without saying, “I also feel that way sometimes.” This non-anthropomorphic approach is already familiar in robot design, where researchers have created robots that are purposefully designed to not be human-like. This design choice is proven to more appropriately reflect system capabilities, and to better situate robots as useful tools, not friends or social counterparts.


How AI product teams are rethinking impact, risk, feasibility

We’re at a strange crossroads in the evolution of AI. Nearly every enterprise wants to harness it. Many are investing heavily. But most are falling flat. AI is everywhere — in strategy decks, boardroom buzzwords and headline-grabbing POCs. Yet, behind the curtain, something isn’t working. ... One of the most widely adopted prioritization models in product management is RICE — which scores initiatives based on Reach, Impact, Confidence, and Effort. It’s elegant. It’s simple. It’s also outdated. RICE was never designed for the world of foundation models, dynamic data pipelines or the unpredictability of inference-time reasoning. ... To make matters worse, there’s a growing mismatch between what enterprises want to automate and what AI can realistically handle. Stanford’s 2025 study, The Future of Work with AI Agents, provides a fascinating lens. ... ARISE adds three crucial layers that traditional frameworks miss: First, AI Desire — does solving this problem with AI add real value, or are we just forcing AI into something that doesn’t need it? Second, AI Capability — do we actually have the data, model maturity and engineering readiness to make this happen? And third, Intent — is the AI meant to act on its own or assist a human? Proactive systems have more upside, but they also come with far more risk. ARISE lets you reflect that in your prioritization.
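For reference, RICE reduces to a single formula: (Reach × Impact × Confidence) / Effort. The sketch below shows that calculation; the initiatives and numbers are invented, and the ARISE questions from the article (AI desire, capability, intent) are qualitative gates rather than part of this score.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    reach: float        # e.g. users affected per quarter
    impact: float       # 0.25 (minimal) .. 3 (massive)
    confidence: float   # 0..1
    effort: float       # person-months

    def rice(self) -> float:
        return self.reach * self.impact * self.confidence / self.effort

ideas = [
    Initiative("AI ticket triage", reach=5000, impact=2, confidence=0.6, effort=4),
    Initiative("Manual report cleanup", reach=800, impact=1, confidence=0.9, effort=1),
]
for idea in sorted(ideas, key=lambda i: i.rice(), reverse=True):
    print(f"{idea.name}: {idea.rice():.0f}")   # AI ticket triage: 1500, Manual report cleanup: 720
```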


Cloud control: The key to greener, leaner data centers

To fully unlock these cost benefits, businesses must adopt FinOps practices: the discipline of bringing engineering, finance, and operations together to optimize cloud spending. Without it, cloud costs can quickly spiral, especially in hybrid environments. But, with FinOps, organizations can forecast demand more accurately, optimise usage, and ensure every pound spent delivers value. ... Cloud platforms make it easier to use computing resources more efficiently. Even though the infrastructure stays online, hyperscalers can spread workloads across many customers, keeping their hardware busier and more productive, and manage capacity at a scale that allows them to power down hardware when it's not in use. ... The combination of cloud computing and artificial intelligence (AI) is further reshaping data center operations. AI can analyse energy usage, detect inefficiencies, and recommend real-time adjustments. But running these models on-premises can be resource-intensive. Cloud-based AI services offer a more efficient alternative. Take Google, for instance. By applying AI to its data center cooling systems, it cut energy use by up to 40 percent. Other organizations can tap into similar tools via the cloud to monitor temperature, humidity, and workload patterns and automatically adjust cooling, load balancing, and power distribution.


You Backed Up Your Data, but Can You Bring It Back?

Many IT teams assume that the existence of backups guarantees successful restoration. This misconception can be costly. A recent report from Veeam revealed that 49% of companies failed to recover most of their servers after a significant incident. This highlights a painful reality: Most backup strategies focus too much on storage and not enough on service restoration. Having backup files is not the same as successfully restoring systems. In real-world recovery scenarios, teams face unknown dependencies, a lack of orchestration, incomplete documentation, and gaps between infrastructure and applications. When services need to be restored in a specific order and under intense pressure, any oversight can become a significant bottleneck. ... Relying on a single backup location creates a single point of failure. Local backups can be fast but are vulnerable to physical threats, hardware failures, or ransomware attacks. Cloud backups offer flexibility and off-site protection but may suffer bandwidth constraints, cost limitations, or provider outages. A hybrid backup strategy ensures multiple recovery paths by combining on-premises storage, cloud solutions, and optionally offline or air-gapped options. This approach allows teams to choose the fastest or most reliable method based on the nature of the disruption.


Beyond Prevention: How Cybersecurity and Cyber Insurance Are Converging to Transform Risk Management

Historically, cybersecurity and cyber insurance have operated in silos, with companies deploying technical defenses to fend off attacks while holding a cyber insurance policy as a safety net. This fragmented approach often leaves gaps in coverage and preparedness. ... The insurance sector is at a turning point. Traditional models that assess risk at the point of policy issuance are rapidly becoming outdated in the face of constantly evolving cyber threats. Insurers who fail to adapt to an integrated model risk being outpaced by agile Cyber Insurtech companies, which leverage cutting-edge cyber intelligence, machine learning, and risk analytics to offer adaptive coverage and continuous monitoring. Some insurers have already begun to reimagine their role—not only as claim processors but as active partners in risk prevention. ... A combined cybersecurity and insurance strategy goes beyond traditional risk management. It aligns the objectives of both the insurer and the insured, with insurers assuming a more proactive role in supporting risk mitigation. By reducing the probability of significant losses through continuous monitoring and risk-based incentives, insurers are building a more resilient client base, directly translating to reduced claim frequency and severity.

Daily Tech Digest - July 27, 2025


Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs


Amazon AI coding agent hacked to inject data wiping commands

The hacker gained access to Amazon’s repository after submitting a pull request from a random account, likely due to workflow misconfiguration or inadequate permission management by the project maintainers. ... On July 23, Amazon received reports from security researchers that something was wrong with the extension, and the company started to investigate. The next day, AWS released a clean version, Q 1.85.0, which removed the unapproved code. “AWS is aware of and has addressed an issue in the Amazon Q Developer Extension for Visual Studio Code (VSC). Security researchers reported a potential for unapproved code modification,” reads the security bulletin. “AWS Security subsequently identified a code commit through a deeper forensic analysis in the open-source VSC extension that targeted Q Developer CLI command execution.” “After which, we immediately revoked and replaced the credentials, removed the unapproved code from the codebase, and subsequently released Amazon Q Developer Extension version 1.85.0 to the marketplace.” AWS assured users that there was no risk from the previous release because the malicious code was incorrectly formatted and wouldn’t run on their environments.


How to migrate enterprise databases and data to the cloud

Migrating data is only part of the challenge; database structures, stored procedures, triggers and other code must also be moved. In this part of the process, IT leaders must identify and select migration tools that address the specific needs of the enterprise, especially if they’re moving between different database technologies (heterogeneous migration). Some things they’ll need to consider are: compatibility, transformation requirements and the ability to automate repetitive tasks.  ... During migration, especially for large or critical systems, IT leaders should keep their on-premises and cloud databases synchronized to avoid downtime and data loss. To help facilitate this, select synchronization tools that can handle the data change rates and business requirements. And be sure to test these tools in advance: High rates of change or complex data relationships can overwhelm some solutions, making parallel runs or phased cutovers unfeasible. ... Testing is a safety net. IT leaders should develop comprehensive test plans that cover not just technical functionality, but also performance, data integrity and user acceptance. Leaders should also plan for parallel runs, operating both on-premises and cloud systems in tandem, to validate that everything works as expected before the final cutover. They should engage end users early in the process in order to ensure the migrated environment meets business needs.
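One common way to back the "testing is a safety net" point is to compare row counts and per-table checksums between source and target after a cutover rehearsal. The sketch below uses sqlite3 purely as a stand-in for real database drivers; a heterogeneous migration would also need type-aware normalization before hashing.

```python
import hashlib
import sqlite3   # stand-in for the real source and target database drivers

def table_checksum(conn: sqlite3.Connection, table: str) -> tuple[int, str]:
    """Return (row count, checksum) for a table, ordering rows deterministically by first column."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()   # table names are trusted here
    digest = hashlib.sha256()
    for row in rows:
        digest.update(repr(row).encode())
    return len(rows), digest.hexdigest()

def validate_migration(source, target, tables: list[str]) -> bool:
    ok = True
    for table in tables:
        src, dst = table_checksum(source, table), table_checksum(target, table)
        ok = ok and src == dst
        print(f"{table}: source={src[0]} rows, target={dst[0]} rows -> {'OK' if src == dst else 'MISMATCH'}")
    return ok

if __name__ == "__main__":
    src, dst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
    for db in (src, dst):
        db.execute("CREATE TABLE orders (id INTEGER, total REAL)")
        db.execute("INSERT INTO orders VALUES (1, 9.99)")
    validate_migration(src, dst, ["orders"])   # orders: source=1 rows, target=1 rows -> OK
```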


Researchers build first chip combining electronics, photonics, and quantum light

The new chip integrates quantum light sources and electronic controllers using a standard 45-nanometer semiconductor process. This approach paves the way for scaling up quantum systems in computing, communication, and sensing, fields that have traditionally relied on hand-built devices confined to laboratory settings. "Quantum computing, communication, and sensing are on a decades-long path from concept to reality," said Miloš Popović, associate professor of electrical and computer engineering at Boston University and a senior author of the study. "This is a small step on that path – but an important one, because it shows we can build repeatable, controllable quantum systems in commercial semiconductor foundries." ... "What excites me most is that we embedded the control directly on-chip – stabilizing a quantum process in real time," says Anirudh Ramesh, a PhD student at Northwestern who led the quantum measurements. "That's a critical step toward scalable quantum systems." This focus on stabilization is essential to ensure that each light source performs reliably under varying conditions. Imbert Wang, a doctoral student at Boston University specializing in photonic device design, highlighted the technical complexity.


Product Manager vs. Product Owner: Why Teams Get These Roles Wrong

While PMs work on the strategic plane, Product Owners anchor delivery. The PO is the guardian of the backlog. They translate the product strategy into epics and user stories, groom the backlog, and support the development team during sprints. They don’t just manage the “what” — they deeply understand the “how.” They answer developer questions, clarify scope, and constantly re-evaluate priorities based on real-time feedback. In Agile teams, they play a central role in turning strategic vision into working software. Where PMs answer to the business, POs are embedded with the dev team. They make trade-offs, adjust scope, and ensure the product is built right. ... Some products need to grow fast. That’s where Growth PMs come in. They focus on the entire user lifecycle, often structured using the PIRAT funnel: Problem, Insight, Reach, Activation, and Trust (a modern take on traditional Pirate Metrics, such as Acquisition, Activation, Retention, Referral, and Revenue). This model guides Growth PMs in identifying where user friction occurs and what levers to pull for meaningful impact. They conduct experiments, optimize funnels, and collaborate closely with marketing and data science teams to drive user growth. 


Ransomware payments to be banned – the unanswered questions

With thresholds in place, businesses/organisations may choose to operate differently so that they aren’t covered by the ban, such as by lowering turnover or employee numbers. All of this said, rules like this could help to get a better picture of what’s going on with ransomware threats in the UK. Arda Büyükkaya, senior cyber threat intelligence analyst at EclecticIQ, explains more: “As attackers evolve their tactics and exploit vulnerabilities across sectors, timely intelligence-sharing becomes critical to mounting an effective defence. Encouraging businesses to report incidents more consistently will help build a stronger national threat intelligence picture, something that’s important as these attacks grow more frequent and sophisticated. To spare any confusion, sector-specific guidance should be provided by government on how the rules should be implemented, making resources clear and accessible. “Many victims still hesitate to come forward due to concerns around reputational damage, legal exposure, or regulatory fallout,” said Büyükkaya. “Without mechanisms that protect and support victims, underreporting will remain a barrier to national cyber resilience.” Especially in the earlier days of the legislation, organisations may still feel pressured to pay in order to keep operations running, even if they’re banned from doing so.


AI Unleashed: Shaping the Future of Cyber Threats

AI optimizes reconnaissance and targeting, giving hackers the tools to scour public sources, leaked and publicly available breach data, and social media to build detailed profiles of potential targets in minutes. This enhanced data gathering lets attackers identify high-value victims and network vulnerabilities with unprecedented speed and accuracy. AI has also supercharged phishing campaigns by automatically crafting phishing emails and messages that mimic an organization’s formatting and reference real projects or colleagues, making them nearly indistinguishable from genuine human-originated communications. ... AI is also being weaponized to write and adapt malicious code. AI-powered malware can autonomously modify itself to slip past signature-based antivirus defenses, probe for weaknesses, select optimal exploits, and manage its own command-and-control decisions. Security experts note that AI accelerates the malware development cycle, reducing the time from concept to deployment. ... AI presents more than external threats. It has exposed a new category of targets and vulnerabilities, as many organizations now rely on AI models for critical functions, such as authentication systems and network monitoring. These AI systems themselves can be manipulated or sabotaged by adversaries if proper safeguards have not been implemented.


Agile and Quality Engineering: Building a Culture of Excellence Through a Holistic Approach

Agile development relies on rapid iteration and frequent delivery, and this rhythm demands fast, accurate feedback on code quality, functionality, and performance. With continuous testing integrated into automated pipelines, teams receive near real-time feedback on every code commit. This immediacy empowers developers to make informed decisions quickly, reducing delays caused by waiting for manual test cycles or late-stage QA validations. Quality engineering also enhances collaboration between developers and testers. In a traditional setup, QA and development operate in silos, often leading to communication gaps, delays, and conflicting priorities. In contrast, QE promotes a culture of shared ownership, where developers write unit tests, testers contribute to automation frameworks, and both parties work together during planning, development, and retrospectives. This collaboration strengthens mutual accountability and leads to better alignment on requirements, acceptance criteria, and customer expectations. Early and continuous risk mitigation is another cornerstone benefit. By incorporating practices like shift-left testing, test-driven development (TDD), and continuous integration (CI), potential issues are identified and resolved long before they escalate. 
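As a concrete, if generic, illustration of the TDD practice mentioned above, the tests are written first and the implementation exists only to make them pass; the pricing example is invented.

```python
# Written first: these tests fail until apply_discount below is implemented.
def test_discount_is_capped_at_100_percent():
    assert apply_discount(price=50.0, percent=150) == 0.0

def test_regular_discount():
    assert apply_discount(price=80.0, percent=25) == 60.0

# Written second, to make the tests pass (normally this lives in its own module).
def apply_discount(price: float, percent: float) -> float:
    percent = min(max(percent, 0), 100)    # clamp the discount to a sane range
    return round(price * (1 - percent / 100), 2)

if __name__ == "__main__":
    test_discount_is_capped_at_100_percent()
    test_regular_discount()
    print("all tests pass")
```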


Could Metasurfaces be The Next Quantum Information Processors?

Broadly speaking, the work embodies metasurface-based quantum optics which, beyond carving a path toward room-temperature quantum computers and networks, could also benefit quantum sensing or offer “lab-on-a-chip” capabilities for fundamental science. Designing a single metasurface that can finely control properties like brightness, phase, and polarization presented unique challenges because of the mathematical complexity that arises once the number of photons and therefore the number of qubits begins to increase. Every additional photon introduces many new interference pathways, which in a conventional setup would require a rapidly growing number of beam splitters and output ports. To bring order to the complexity, the researchers leaned on a branch of mathematics called graph theory, which uses points and lines to represent connections and relationships. By representing entangled photon states as many connected lines and points, they were able to visually determine how photons interfere with each other, and to predict their effects in experiments. Graph theory is also used in certain types of quantum computing and quantum error correction but is not typically considered in the context of metasurfaces, including their design and operation. The resulting paper was a collaboration with the lab of Marko Loncar, whose team specializes in quantum optics and integrated photonics and provided needed expertise and equipment.


New AI architecture delivers 100x faster reasoning than LLMs with just 1,000 training examples

When faced with a complex problem, current LLMs largely rely on chain-of-thought (CoT) prompting, breaking down problems into intermediate text-based steps, essentially forcing the model to “think out loud” as it works toward a solution. While CoT has improved the reasoning abilities of LLMs, it has fundamental limitations. In their paper, researchers at Sapient Intelligence argue that “CoT for reasoning is a crutch, not a satisfactory solution. It relies on brittle, human-defined decompositions where a single misstep or a misorder of the steps can derail the reasoning process entirely.” ... To move beyond CoT, the researchers explored “latent reasoning,” where instead of generating “thinking tokens,” the model reasons in its internal, abstract representation of the problem. This is more aligned with how humans think; as the paper states, “the brain sustains lengthy, coherent chains of reasoning with remarkable efficiency in a latent space, without constant translation back to language.” However, achieving this level of deep, internal reasoning in AI is challenging. Simply stacking more layers in a deep learning model often leads to a “vanishing gradient” problem, where learning signals weaken across layers, making training ineffective. 


For the love of all things holy, please stop treating RAID storage as a backup

Although RAID is a backup by definition, practically, a backup doesn't look anything like a RAID array. That's because an ideal backup is offsite. It's not on your computer, and ideally, it's not even in the same physical location. Remember, RAID is a warranty, and a backup is insurance. RAID protects you from inevitable failure, while a backup protects you from unforeseen failure. Eventually, your drives will fail, and you'll need to replace disks in your RAID array. This is part of routine maintenance, and if you're operating an array for long enough, you should probably have drive swaps on a schedule of several years to keep everything operating smoothly. A backup will protect you from everything else. Maybe you have multiple drives fail at once. A backup will protect you. Lord forbid you fall victim to a fire, flood, or other natural disaster and your RAID array is lost or damaged in the process. A backup still protects you. It doesn't need to be a fire or flood for you to get use out of a backup. There are small issues that could put your data at risk, such as your PC being infected with malware, or trying to write (and replicate) corrupted data. You can dream up just about any situation where data loss is a risk, and a backup will be able to get your data back in situations where RAID can't. 

Daily Tech Digest - June 02, 2025


Quote for the day:

"The best way to predict the future is to create it." -- Peter Drucker


Doing nothing is still doing something

Here's the uncomfortable truth: doing nothing is still doing something – and very often, it's the wrong thing. We saw this play out at the start of the year when Donald Trump's likely return to the White House and the prospect of fresh tariffs sent ripples through global markets. Investors froze, and while the tariffs have been shelved (for now), the real damage had already been done – not to portfolios, but to behaviour. This is decision paralysis in action. And in my experience, it's most acute among entrepreneurs and high-net-worth individuals post-exit, many of whom are navigating wealth independently for the first time. It's human nature to crave certainty, especially when it comes to money, but if you're waiting for a time when everything is calm, clear, and safe before investing or making a financial decision, I've got bad news – that day is never going to arrive. Markets move, the political climate is noisy, the global economy is always in flux. If you're frozen by fear, your money isn't standing still – it's slipping backwards. ... Entrepreneurs are used to taking calculated risks, but when it comes to managing post-exit wealth or personal finances, many find themselves out of their depth. A little knowledge can be a dangerous thing – and half-understanding the tax system, the economy, or the markets can lead to costly mistakes.


The Future of Agile Isn’t ‘agile’

One reason is that agilists introduced too many conflicting and divergent approaches that fragmented the market. “Agile” meant so many things to different people that hiring managers could never predict what they were getting when a candidate’s resume indicated s/he was “experienced in agile development.” Another reason organizations failed to generate value with “agile” was that too many agile approaches focused on changing practices or culture while ignoring the larger delivery system in which the practices operate, reinforcing a culture that is resistant to change. This shouldn’t be a surprise to people following our industry, as my colleague and LeadingAgile CEO Mike Cottmeyer has been talking about why agile fails for over a decade, such as his Agile 2014 presentation, Why is Agile Failing in Large Enterprises… …and what you can do about it. The final reason that led “agile” to its current state of disfavor is that early in the agile movement there was too much money to be made in training and certifications. The industry’s focus on certifications had the effect over time of misaligning the goals of the methodology / training companies and their customers. “Train everyone. Launch trains” may be a short-term success pattern for a methodology purveyor, but it is ultimately unsustainable because the training and practices are too disconnected from tangible results senior executives need to compete and win in the market.


CIOs get serious about closing the skills gap — mainly from within

Staffing and talent issues are affecting CIOs’ ability to double down on strategic and innovation objectives, according to 54% of this year’s respondents. As a result, closing the skills gap has become a huge priority. “What’s driving it in some CIOs’ minds is tied back to their AI deployments,” says Mark Moccia, a vice president research director at Forrester. “They’re under a lot of cost pressure … to get the most out of AI deployments” to increase operational efficiencies and lower costs, he says. “It’s driving more of a need to close the skills gap and find people who have deployed AI successfully.” AI, generative AI, and cybersecurity top the list of skills gaps preventing organizations from achieving objectives, according to an April Gartner report. Nine out of 10 organizations have adopted or plan to adopt skills-based talent growth to address those challenges. ... The best approach, Karnati says, is developing talent from within. “We’re equipping our existing teams with the space, tools, and support needed to explore genAI through practical application, including rapid prototyping, internal hackathons, and proof-of-concept sprints,” Karnati says. “These aren’t just technical exercises — they’re structured opportunities for cross-functional learning, where engineers, product leads, and domain experts collaborate to test real use cases.”


The Critical Quantum Timeline: Where Are We Now And Where Are We Heading?

Technically, the term is fault-tolerant quantum computing. The qubits that quantum computers use to process data have to be kept in a delicate state – sometimes frozen to temperatures very close to absolute zero – in order to stay stable and not “decohere”. Keeping them in this state for longer periods of time requires large amounts of energy but is necessary for more complex calculations. Recent research by Google, among others, is pointing the way towards developing more robust and resilient quantum methods. ... One of the most exciting prospects ahead of us involves applying quantum computing to AI. Firstly, many AI algorithms involve solving the types of problems that quantum computers excel at, such as optimization problems. Secondly, with its ability to more accurately simulate and model the physical world, it will generate huge amounts of synthetic data. ... Looking beyond the next two decades, quantum computing will be changing the world in ways we can’t even imagine yet, just as the leap to transistors and microchips enabled the digital world and the internet of today. It will tackle currently impossible problems, help us create fantastic new materials with amazing properties and medicines that affect our bodies in new ways, and help us tackle huge problems like climate change and cleaning the oceans.


6 hard truths security pros must learn to live with

Every technological leap will be used against you - Information technology is a discipline built largely on rapid advances. Some of these technological leaps can help improve your ability to secure the enterprise. But every last one of them brings new challenges from a security perspective, not the least of which is how they will be used to attack your systems, networks, and data. ... No matter how good you are, your organization will be victimized - This is a hard one to swallow, but if we take the “five stages of grief” approach to cybersecurity, it’s better to reach the “acceptance” level than to remain in denial because much of what happens is simply out of your control. A global survey of 1,309 IT and security professionals found that 79% of organizations suffered a cyberattack within the past 12 months, up from 68% just a year ago, according to cybersecurity vendor Netwrix’s Hybrid Security Trends Report. ... Breach blame will fall on you — and the fallout could include personal liability - As if getting victimized by a security breach isn’t enough, new Securities and Exchange Commission (SEC) rules put CISOs in the crosshairs for potential criminal prosecution. The new rules, which went into effect in 2023, require publicly listed companies to report any material cybersecurity incident within four business days.


Are you an A(I)ction man?

Whilst individually AI-generated action figures have a small impact - a drop in the ocean you could say - trends like this exemplify how easy it is to use AI en masse, and collectively create an ocean of demand. Seeing the number of individuals, even those with knowledge of AI’s lofty resource consumption, partaking in the creation of these avatars, makes me wonder if we need greater awareness of the collective impact of GenAI. Now, I want to take a moment to clarify this is not a criticism of those producing AI-generated content, or of anyone who has taken part in the ‘action figure’ trend. I’ve certainly had many goes with DALL-E for fun, and taken part in various trends in my time, but the volume of these recent images caught my attention. Many of the conversations I had at Connect New York a few weeks ago addressed sustainability and the need for industry collaboration, but perhaps we should also be instilling more awareness from an end-user point of view. After all, ChatGPT, according to the Washington Post, consumes 39.8 million kWh per day. I’d be fascinated to see the full picture of power and water consumption from the AI-generated action figures. Whilst it will only account for a tiny fraction of overall demand, these drops can have a tendency to accumulate. 


The MVP Dilemma: Scale Now or Scale Later?

Teams often have few concrete requirements about scalability. The business may not be a reliable source of information but, as we noted above, they do have a business case that has implicit scalability needs. It’s easy for teams to focus on functional needs, early on, and ignore these implicit scaling requirements. They may hope that scaling won’t be a problem or that they can solve the problem by throwing more computing resources at it. They have a legitimate concern about overbuilding and increasing costs, but hoping that scaling problems won't happen is not a good scaling strategy. Teams need to consider scaling from the start. ... The MVP often has implicit scalability requirements, such as "in order for this idea to be successful we need to recruit ten thousand new customers". Asking the right questions and engaging in collaborative dialogue can often uncover these. Often these relate to success criteria for the MVP experiment. ... Some people see asynchronous communication as another scaling panacea because it allows work to proceed independently of the task that initiated the work. The theory is that the main task can do other things while work is happening in the background. So long as the initiating task does not, at some point, need the results of the asynchronous task to proceed, asynchronous processing can help a system to scale. 
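The asynchronous-communication point can be shown with a tiny sketch: the initiating task hands work off and responds immediately, which only helps scaling because nothing downstream waits on the result. The "report generation" task below is hypothetical.

```python
import asyncio

async def generate_report(order_id: int) -> None:
    await asyncio.sleep(2)                 # stands in for slow background work
    print(f"report for order {order_id} ready")

async def handle_request(order_id: int) -> str:
    # Fire off the slow work and respond immediately; the caller never waits,
    # which works only because nothing here needs the report's result.
    asyncio.create_task(generate_report(order_id))
    return f"order {order_id} accepted"

async def main():
    print(await handle_request(42))        # responds right away
    await asyncio.sleep(2.5)               # keep the loop alive so the background task finishes

asyncio.run(main())
```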


Data Integrity: What It Is and Why It Matters

By contrast, data quality builds on methods for confirming the integrity of the data and also considers the data’s uniqueness, timeliness, accuracy, and consistency. Data is considered “high quality” when it ranks high in all these areas based on the assessment of data analysts. High-quality data is considered trustworthy and reliable for its intended applications based on the organization’s data validation rules. The benefits of data integrity and data quality are distinct, despite some overlap. Data integrity allows a business to recover quickly and completely in the event of a system failure, prevent unauthorized access to or modification of the data, and support the company’s compliance efforts. By confirming the quality of their data, businesses improve the efficiency of their data operations, increase the value of their data, and enhance collaboration and decision-making. Data Quality efforts also help companies reduce their costs, enhance employee productivity, and establish closer relationships with their customers. Implementing a data integrity strategy begins by identifying the sources of potential data corruption in your organization. These include human error, system malfunctions, unauthorized access, failure to validate and test, and lack of Governance. A data integrity plan operates at both the database level and business level.
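A minimal way to see the distinction drawn here: an integrity check verifies the stored data is intact and unmodified, while quality checks score dimensions such as uniqueness and timeliness on top of that. Both functions below are generic sketches, not any vendor's tooling.

```python
import hashlib
from datetime import datetime, timedelta

def integrity_ok(payload: bytes, recorded_sha256: str) -> bool:
    """Integrity: the data has not been corrupted or silently modified."""
    return hashlib.sha256(payload).hexdigest() == recorded_sha256

def quality_report(records: list[dict]) -> dict:
    """Quality: uniqueness and timeliness scores computed over already-intact data."""
    ids = [r["id"] for r in records]
    fresh_cutoff = datetime.now() - timedelta(days=30)
    return {
        "uniqueness": len(set(ids)) / len(ids),
        "timeliness": sum(r["updated_at"] >= fresh_cutoff for r in records) / len(records),
    }

print(integrity_ok(b"hello", hashlib.sha256(b"hello").hexdigest()))   # True
records = [
    {"id": 1, "updated_at": datetime.now()},
    {"id": 1, "updated_at": datetime.now() - timedelta(days=90)},
]
print(quality_report(records))   # {'uniqueness': 0.5, 'timeliness': 0.5}
```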


Backup-as-a-service explained: Your guide to cloud data protection

With BaaS, enterprises have quick, easy access to their data. Providers store multiple copies of backups in different locations so that data can be recovered when lost due to outages, failures or accidental deletion. BaaS also features geographic distribution and automatic failover, when data handling is automatically moved to a different server or system in the event of an incident to ensure that it is safe and readily available. ... With BaaS, the provider uses its own cloud infrastructure and expertise to handle the entire backup and restoration process. Enterprises simply connect to the backup engine, set their preferences and the platform handles file transfer, encryption and maintenance. Automation is the engine that drives BaaS, helping ensure that data is continuously backed up without slowing down network performance or interrupting day-to-day work. Enterprises first select the data they need backed up — whether it be simple files or complex apps — backup frequency and data retention times. ... Enterprises shouldn’t just jump right into BaaS — proper preparation is critical. Firstly, it is important to define a backup policy that identifies the organization’s critical data that must be backed up. This policy should also include backup frequency, storage location and how long copies should be retained.
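A backup policy of the kind described (what to back up, how often, how long to retain) is often captured as plain configuration; everything below, names and values alike, is a hypothetical illustration rather than any BaaS provider's schema.

```python
backup_policy = {
    "scope": ["file-shares/finance", "databases/orders", "saas/mailboxes"],
    "frequency": {"full": "weekly", "incremental": "hourly"},
    "retention": {"daily": 30, "weekly": 12, "monthly": 24},    # copies kept per tier
    "encryption": {"in_transit": "TLS 1.3", "at_rest": "AES-256"},
    "copies": {"regions": ["eu-west", "eu-central"], "air_gapped": True},
}

def retention_days(policy: dict) -> int:
    """Rough upper bound on how far back a restore can reach, in days."""
    r = policy["retention"]
    return max(r["daily"], r["weekly"] * 7, r["monthly"] * 30)

print(retention_days(backup_policy))   # 720
```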


CISO 3.0: Leading AI governance and security in the boardroom

AI is expanding the CISO’s required skillset beyond cybersecurity to include fluency in data science, machine learning fundamentals, and understanding how to evaluate AI models – not just technically, but from a governance and risk perspective. Understanding how AI works and how to use it responsibly is essential. Fortunately, AI has also evolved how we train our teams. For example, adaptive learning platforms that personalize content and simulate real-world scenarios are assisting in closing the skills gap more effectively. Ultimately, to become successful in the AI space, both CISOs and their teams will need to grasp how AI models are trained, the data they rely on, and the risks they may introduce. CISOs should always prioritize accountability and transparency. Red flags to look out for include a lack of explainability or insufficient auditing capabilities, both of which leave companies vulnerable. It’s important to understand how a given tool handles sensitive data, and whether it has proven success in similar environments. Beyond that, it’s also vital to evaluate how well the tool aligns with your governance model, that it can be audited, and that it integrates well into your existing systems. Lastly, overpromising capabilities or providing an unclear roadmap for support are signs to proceed with caution.

Daily Tech Digest - April 02, 2025


Quote for the day:

"People will not change their minds but they will make new decisions based upon new information." -- Orrin Woodward


The smart way to tackle data storage challenges

Data intelligence makes data stored on the X10000 ready for AI applications to use as soon as they are ingested. The company has a demo of this, where the X10000 ingests customer support documents and enables users to instantly ask it relevant natural language questions via a locally hosted version of the DeepSeek LLM. This kind of application wouldn’t be possible with low-speed legacy object storage, says the company. The X10000’s all-NVMe storage architecture helps to support low-latency access to this indexed and vectorized data, avoiding front-end caching bottlenecks. Advances like these provide up to 6x faster performance than the X10000’s leading object storage competitors, according to HPE’s benchmark testing. ... The containerized architecture opens up options for inline and out-of-band software services, such as automated provisioning and life cycle management of storage resources. It is also easier to localize a workload’s data and compute resources, minimizing data movement by enabling workloads to process data in place rather than moving it to other compute nodes. This is an important performance factor in low-latency applications like AI training and inference. Another aspect of container-based workloads is that all workloads can interact with the same object storage layer. 


Talent gap complicates cost-conscious cloud planning

The top strategy so far is what one enterprise calls the “Cloud Team.” You assemble all your people with cloud skills, and your own best software architect, and have the team examine current and proposed cloud applications, looking for a high-level approach that meets business goals. In this process, the team tries to avoid implementation specifics, focusing instead on the notion that a hybrid application has an agile cloud side and a governance-and-sovereignty data center side, and what has to be done is push functionality into the right place. ... To enterprises who tried the Cloud Team, there’s also a deeper lesson. In fact, there are two. Remember the old “the cloud changes everything” claim? Well, it does, but not the way we thought, or at least not as simply and directly as we thought. The economic revolution of the cloud is selective, a set of benefits that has to be carefully fit to business problems in order to deliver the promised gains. Application development overall has to change, to emphasize a strategic-then-tactical flow that top-down design always called for but didn’t always deliver. That’s the first lesson. The second is that the kinds of applications that the cloud changes the most are applications we can’t move there, because they never got implemented anywhere else.


Your smart home may not be as secure as you think

Most smart devices rely on Wi-Fi to communicate. If these devices connect to an unsecured or poorly protected Wi-Fi network, they can become an easy target. Unencrypted networks are especially vulnerable, and hackers can intercept sensitive data, such as passwords or personal information, being transmitted from the devices. ... Many smart devices collect personal data—sometimes more than users realize. Some devices, like voice assistants or security cameras, are constantly listening or recording, which can lead to privacy violations if not properly secured. In some cases, manufacturers don’t encrypt or secure the data they collect, making it easier for malicious actors to exploit it. ... Smart home devices often connect to third-party platforms or other devices. These integrations can create security holes if the third-party services don’t have strong protections in place. A breach in one service could give attackers access to an entire smart home ecosystem. To mitigate this risk, it’s important to review the security practices of any third-party service before integrating it with your IoT devices. ... If your devices support it, always enable 2FA and link your accounts to a reliable authentication app or your mobile number. You can use 2FA with smart home hubs and cloud-based apps that control IoT devices.


Beyond compensation—crafting an experience that retains talent

Looking ahead, the companies that succeed in attracting and retaining top talent will be those that embrace innovation in their Total Rewards strategies. AI-driven personalization is already changing the game—organizations are using AI-powered platforms to tailor benefits to individual employee needs, offering a menu of options such as additional PTO, learning stipends, or wellness perks. Similarly, equity-based compensation models are evolving, with some businesses exploring cryptocurrency-based rewards and fractional ownership opportunities. Sustainability is also becoming a key factor in Total Rewards. Companies that incorporate sustainability-linked incentives, such as carbon footprint reduction rewards or volunteer days, are seeing higher engagement and satisfaction levels. ... Total Rewards is no longer just about compensation—it’s about creating an ecosystem that supports employees in every aspect of their work and life. Companies that adopt the VALUE framework—Variable pay, Aligned well-being benefits, Learning and growth opportunities, Ultimate flexibility, and Engagement-driven recognition—will not only attract top talent but also foster long-term loyalty and satisfaction.


Bridging the Gap Between the CISO & the Board of Directors

Many executives, including board members, may not fully understand the CISO's role. This isn't just a communications gap; it's also an opportunity to build relationships across departments. When CISOs connect security priorities to broader business goals, they show how cybersecurity is a business enabler rather than just an operational cost. ... Often, those in technical roles lack the ability to speak anything other than the language of tech, making it harder to communicate with board members who don't hold tech or cybersecurity expertise. I remember presenting to our board early into my CISO role and, once I was done, seeing some blank stares. The issue wasn't that they didn't care about what I was saying; we just weren't speaking the same language. ... There are many areas in which communication between a board and CISO is important — but there may be none more important than compliance. Data breaches today are not just technical failures. They carry significant legal, financial, and reputational consequences. In this environment, regulatory compliance isn't just a box to check; it's a critical business risk that CISOs must manage, particularly as boards become more aware of the business impact of control failures in cybersecurity.


What does a comprehensive backup strategy look like?

Though backups are rarely needed, they form the foundation of disaster recovery. Milovan follows the classic 3-2-1 rule: three data copies, on two different media types, with one off-site copy. He insists on maintaining multiple copies “just in case.” In addition, NAS users need to update their OS regularly, Synology’s Alexandra Bejan says. “Outdated operating systems are particularly vulnerable there.” Bejan emphasizes the positives from implementing the textbook best practices Ichthus employs. ... One may imagine that smaller enterprises make for easier targets due to their limited IT. However, nothing could be further from the truth. Bejan: “We have observed that the larger the enterprise, the more difficult it is to implement a comprehensive data protection strategy.” She says the primary reason for this lies in the previously fragmented investments in backup infrastructure, where different solutions were procured for various workloads. “These legacy solutions struggle to effectively manage the rapidly growing number of workloads and the increasing data size. At the same time, they require significant human resources for training, with steep learning curves, making self-learning difficult. When personnel are reassigned, considerable time is needed to relearn the system.”
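The 3-2-1 rule Milovan follows is simple enough to express as a check over a backup inventory; the inventory entries below are hypothetical.

```python
def satisfies_3_2_1(copies: list[dict]) -> bool:
    """3 copies of the data, on 2 different media types, with 1 copy off-site."""
    return (
        len(copies) >= 3
        and len({c["media"] for c in copies}) >= 2
        and any(c["offsite"] for c in copies)
    )

inventory = [
    {"media": "nas", "offsite": False},
    {"media": "lto-tape", "offsite": False},
    {"media": "cloud-object-storage", "offsite": True},
]
print(satisfies_3_2_1(inventory))   # True
```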


Malicious actors increasingly put privileged identity access to work across attack chains

Many of these credentials are extracted from computers using so-called infostealer malware, malicious programs that scour the operating system and installed applications for saved usernames and passwords, browser session tokens, SSH and VPN certificates, API keys, and more. The advantage of using stolen credentials for initial access is that they require less skill compared to exploiting vulnerabilities in publicly facing applications or tricking users into installing malware from email links or attachments — although these initial access methods remain popular as well. ... “Skilled actors have created tooling that is freely available on the open web, easy to deploy, and designed to specifically target cloud environments,” the Talos researchers found. “Some examples include ROADtools and AAAInternals, publicly available frameworks designed to enumerate Microsoft Entra ID environments. These tools can collect data on users, groups, applications, service principals, and devices, and execute commands.” These are often coupled with techniques designed to exploit the lack of MFA or incorrectly configured MFA. For example, push spray attacks, also known as MFA bombing or MFA fatigue, rely on bombing the user with MFA push notifications on their phones until they get annoyed and approve the login thinking it’s probably the system malfunctioning.


Role of Blockchain in Enhancing Cybersecurity

At its core, a blockchain is a distributed ledger in which each data block is cryptographically connected to its predecessor, forming an unbreakable chain. Without network authorization, modifying or removing data from a blockchain becomes exceedingly difficult. This ensures that conventional data records stay consistent and accurate over time. The architectural structure of blockchain plays a critical role in protecting data integrity. Every single transaction is time-stamped and merged into a block, which is then confirmed and sealed through consensus. This process provides an undeniable record of all activities, simplifying audits and boosting confidence in system reliability. Similarly, blockchain ensures that every financial transaction is correctly documented and easily accessible. This innovation helps prevent record manipulation, double-spending, and other forms of fraud. By combining cryptographic safeguards with a decentralized architecture, it offers an ideal solution to information security. It also significantly reduces risks related to data breaches, hacking, and unauthorized access in the digital realm. Furthermore, blockchain strengthens cybersecurity by addressing concerns about unauthorized access and the rising threat of cyberattacks. 
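The "each block cryptographically connected to its predecessor" property is straightforward to demonstrate; the sketch below shows only the hash chaining and tamper detection, leaving out the consensus and distribution that make a real blockchain trustworthy across parties.

```python
import hashlib
import json
import time

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain: list[dict]) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False                        # block contents were tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                        # the link to the predecessor is broken
    return True

chain = [make_block({"tx": "genesis"}, prev_hash="0")]
chain.append(make_block({"tx": "alice->bob 5"}, prev_hash=chain[-1]["hash"]))
print(chain_is_valid(chain))                    # True
chain[0]["data"]["tx"] = "forged"
print(chain_is_valid(chain))                    # False: the tampering is detected
```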


Thriving in the Second Wave of Big Data Modernization

When businesses want to use big data to power AI solutions – as opposed to the more traditional types of analytics workloads that predominated during the first wave of big data modernization – the problems stemming from poor data management snowball. They transform from mere annoyances or hindrances into show stoppers. ... But in the age of AI, this process would likely instead entail giving the employee access to a generative AI tool that can interpret a question formulated using natural language and generate a response based on the organizational data that the AI was trained on. In this case, data quality or security issues could become very problematic. ... Unfortunately, there is no magic bullet that can cure the types of issues I’ve laid out above. A large part of the solution involves continuing to do the hard work of improving data quality, erecting effective access controls and making data infrastructure even more scalable. As they do these things, however, businesses must pay careful attention to the unique requirements of AI use cases. For example, when they create security controls, they must do so in ways that are recognizable to AI tools, such that the tools will know which types of data should be accessible to which users.


The DevOps Bottleneck: Why IaC Orchestration is the Missing Piece

At the end of the day, instead of eliminating operational burdens, many organizations just shifted them. DevOps, SREs, CloudOps—whatever you call them—these teams still end up being the gatekeepers. They own the application deployment pipelines, infrastructure lifecycle management, and security policies. And like any team, they seek independence and control—not out of malice, but out of necessity. Think about it: If your job is to keep production stable, are you really going to let every dev push infrastructure changes willy-nilly? Of course not. The result? Silos of unique responsibility and sacred internal knowledge. The very teams that were meant to empower developers become blockers instead. ... IaC orchestration isn’t about replacing your existing tools; it’s about making them work at scale. Think about how GitHub changed software development. Version control wasn’t new—but GitHub made it easier to collaborate, review code, and manage contributions without stepping on each other’s work. That’s exactly what orchestration does for IaC. It allows large teams to manage complex infrastructure without turning into a bottleneck. It enforces guardrails while enabling self-service for developers. 

Daily Tech Digest - March 31, 2025


Quote for the day:

"To succeed in business it is necessary to make others see things as you see them." -- Aristotle Onassis



World Backup Day: Time to take action on data protection

“The best protection that businesses can give their backups is to keep at least two copies, one offline and the other offsite”, continues Fine. “By keeping one offline, an airgap is created between the backup and the rest of the IT environment. Should a business be the victim of a cyberattack, the threat physically cannot spread into the backup as there’s no connection to enable this daisy-chain effect. By keeping another copy offsite, businesses can prevent the backup suffering due to the same disaster (such as flooding or wildfires) as the main office.” ... “As such, traditional backup best practices remain important. Measures like encryption (in transit and at rest), strong access controls, immutable or write-once storage, and air-gapped or physically separated backups help defend against increasingly sophisticated threats. To ensure true resilience, backups must be tested regularly. Testing confirms that the data is recoverable, helps teams understand the recovery process, and verifies recovery speeds, whilst supporting good governance and risk management.” ... “With the move towards a future of AI-driven technologies, the amount of data we generate and use is set to increase exponentially. With data often containing valuable information, any loss or impact could have devastating consequences.”
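The testing advice above (confirm data is recoverable, verify recovery speeds) can be framed as a small restore drill; the restore_func parameter below is a placeholder for whatever restore call a given backup tool provides.

```python
import hashlib
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def restore_drill(restore_func, backup_id: str, expected_hashes: dict[str, str]) -> dict:
    """Run a restore, verify file contents against known-good hashes, and time the recovery."""
    start = time.monotonic()
    restored_dir = restore_func(backup_id)        # placeholder: your backup tool's restore call
    elapsed = time.monotonic() - start
    mismatches = [
        name for name, expected in expected_hashes.items()
        if sha256_of(Path(restored_dir) / name) != expected
    ]
    return {
        "recovery_seconds": round(elapsed, 1),
        "intact": not mismatches,
        "mismatches": mismatches,
    }
```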


5 Common Pitfalls in IT Disaster Recovery (and How to Avoid Them)

One of the most common missteps in IT disaster recovery is viewing it as a “check-the-box” exercise — something to complete once and file away. But disaster recovery isn’t static. As infrastructure evolves, business processes shift and new threats emerge, a plan that was solid two years ago may now be dangerously outdated. An untested, unrefreshed IT/DR plan can give a false sense of security, only to fail when it’s needed most. Instead, treat IT/DR as a living process. Regularly review and update it with changes to your technology stack, business priorities, and risk landscape. ... A disaster recovery plan that lives only on paper is likely to fail. Many organizations either skip testing altogether or run through it under ideal, low-pressure conditions (far from the chaos of a real crisis). When a true disaster hits, the stress, urgency, and complexity can quickly overwhelm teams that haven’t practiced their roles. That’s why regular, scenario-based testing is essential. ... Even the most robust IT disaster recovery plan can fail if roles are unclear and communication breaks down. Without well-defined responsibilities and structured escalation paths, response efforts become disorganized and slow — often when speed matters most.


How CISOs can balance business continuity with other responsibilities

The challenge for CISOs is providing security while ensuring the business recovers quickly without reinfecting systems or making rushed decisions that could lead to repeated incidents. The new reality of business continuity is dealing with cyber-led disruptions. Organizations have taken note, with 46% nominating cybersecurity incidents as the top business continuity priority ... While CISOs may find that their remit is expanding to cover business continuity, a lack of clear delineation of roles and responsibilities can spell trouble. To effectively handle business continuity, cybersecurity leaders need a framework to collaborate with IT leadership. Responding to events requires a delicate balance between thoroughness of investigation and speed of recovery that traditional business continuity plan approaches may not fit. On paper, the CISO owns the protection of confidentiality, integrity, and availability, but availability was outsourced a long time ago to either the CIO or facilities, according to Blake. “BCDR is typically owned by the CIO or facilities, but in a cyber incident, the CISO will be holding the toilet chain for the attack, while all the plumbing is provided by the CIO,” he says.


Two things you need in place to successfully adopt AI

A well-defined policy is essential for companies to deploy and leverage this technology securely. This technology will continue to move fast and innovate, giving automation and machines more power in organizational decision-making, and the first line of defense for companies is a clear, accessible AI policy that the whole company is aware of and subscribes to. Enforcing a security policy also means defining which risk ratings are acceptable for an organization and being able to reprioritize those ratings as the environment changes. There will always be errors and false positives. Different organizations have different risk tolerances or different interpretations depending on their operations and data sensitivity. ... Developers need a secure-coding mindset that extends beyond basic coding knowledge. Code written by developers needs to be clear, elegant, and secure. If it is not, that code is left open to attack. Industry-driven secure coding training is therefore a must and needs to be built into an organization’s DNA, especially at a time when the already prevalent AppSec dilemma is being intensified by the current tech layoffs.


3 things haven’t changed in software engineering

Strategic thinking has long been part of a software engineer’s job: going beyond coding to building. Working in service of a larger purpose helps engineers develop more impactful solutions than simply coding to a set of specifications. With the rise in AI-assisted coding—and, thus, the ability to code and build much faster—the “why” remains at the forefront. We drive business impact by delivering measurable customer benefits. And you have to understand a problem before you can solve it with code. ... The best engineers are inherently curious, with an eye for detail and a desire to learn. Through the decades, that hasn’t really changed; a learning mindset continues to be important for technologists at every level. I’ve always been curious about what makes things tick. As a child, I remember taking things apart to see how they worked. I knew I wanted to be an engineer when I was able to put them back together again. ... Not every great coder aspires to be a people leader; I certainly didn’t. I was introverted growing up. But as I worked my way up at Intuit, I saw firsthand how the right leadership skills could deepen my impact, even when I wasn’t charged with leading anybody. I’ve seen how quick decision making, holistic problem solving, and efficient delegation can drive impact at every level of an organization. And these assets only become more important as we fold AI into the process.


Understanding AI Agent Memory: Building Blocks for Intelligent Systems

Episodic memory in AI refers to the storage of past interactions and the specific actions taken by the agent. Like human memory, episodic memory records the events or “episodes” an agent experiences during its operation. This type of memory is crucial because it enables the agent to reference previous conversations, decisions, and outcomes to inform future actions. ... Semantic memory in AI encompasses the agent’s repository of factual, external information and internal knowledge. Unlike episodic memory, which is tied to specific interactions, semantic memory holds generalized knowledge that the agent can use to understand and interpret the world. This may include language rules, domain-specific information, or self-awareness of the agent’s capabilities and limitations. One common semantic memory use is in Retrieval-Augmented Generation (RAG) applications, where the agent leverages a vast data store to answer questions accurately. ... Procedural memory is the backbone of an AI system’s operational aspects. It includes systemic information such as the structure of the system prompt, the tools available to the agent, and the guardrails that ensure safe and appropriate interactions. In essence, procedural memory defines “how” the agent functions rather than “what” it knows.
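A toy sketch of these three memory types might look like the following (illustrative only, not any specific agent framework): episodic memory as an append-only log of interactions, semantic memory as a small fact store standing in for a RAG index, and procedural memory as the prompt, tools, and guardrails that define how the agent operates.

```python
# Minimal sketch of the three agent memory types described above.
# Names and retrieval logic are deliberately simple stand-ins for real
# vector stores and prompt templates.
from dataclasses import dataclass, field


@dataclass
class AgentMemory:
    episodic: list = field(default_factory=list)   # past interactions ("episodes")
    semantic: dict = field(default_factory=dict)   # general facts, e.g. a RAG document store
    procedural: dict = field(default_factory=lambda: {  # "how" the agent functions
        "system_prompt": "You are a helpful assistant.",
        "tools": ["search", "calculator"],
        "guardrails": ["no PII in responses"],
    })

    def remember(self, user_msg: str, agent_reply: str) -> None:
        """Record one episode so later turns can reference it."""
        self.episodic.append({"user": user_msg, "agent": agent_reply})

    def recall_facts(self, query: str) -> list:
        # crude keyword lookup standing in for embedding-based retrieval
        return [fact for key, fact in self.semantic.items() if key in query.lower()]


memory = AgentMemory()
memory.semantic["refund"] = "Refunds are processed within 5 business days."
memory.remember("How long do refunds take?", "About 5 business days.")

print(memory.recall_facts("what is your refund policy?"))  # semantic fact
print(memory.episodic[-1])                                 # most recent episode
print(memory.procedural["tools"])                          # available tools
```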


Why Leadership Teams Need Training In Crisis Management

You don’t have the time to mull over different iterations or think about different possibilities and outcomes. You and your team need to make a decision quickly. Depending on the crisis at hand, you’ll need to assess the information available, evaluate potential risks, and make a timely decision. Waiting can be detrimental to your business. Failure to inform customers that their information was compromised during a cybersecurity attack could lead them to take their business elsewhere. ... Crisis or not, communication is how teams share information and build trust. During a crisis, it’s up to the leader to communicate efficiently and effectively to the internal teams. It’s natural for panic to ensue during a time of unpredictability and stress. ... it’s not only internal communications that you’re responsible for. You also need to consider what you’re communicating to your customers, vendors, and shareholders. This is where crisis management can come in handy. While you should know how best to speak to your team, communicating externally can prove more challenging. ... One crisis can be the end of your business if not handled properly and thoughtfully. This is especially the case for businesses that undergo internal crises, such as cybersecurity attacks, product recalls, or miscalculated marketing campaigns.


SaaS Is Broken: Why Bring Your Own Cloud (BYOC) Is the Future

BYOC allows customers to run SaaS applications using their own cloud infrastructure and resources rather than relying on a third-party vendor’s infrastructure. This hybrid approach preserves the convenience and velocity of SaaS while balancing cost and ownership with the control of self-hosted solutions. Building a BYOC stack that is easy to adopt, cost-effective, and performant is a significant engineering challenge. But for a software vendor, the benefits to customers make it worth the effort. ... SaaS brought speed and simplicity to software consumption, while traditional on-premises software offered control and predictability. But a more balanced approach is emerging as companies face rising costs, compliance challenges, and the need for data ownership. BYOC is the consolidated evolution of both worlds, combining the convenience of SaaS with the control of on-premises deployment. Instead of sending massive amounts of data to third-party vendors, companies can run SaaS applications within their own cloud infrastructure. This means predictable costs, better compliance, and tailored performance. We’ve seen this hybrid model succeed in other areas. Meta’s Llama gained massive adoption as users could run it on their own infrastructure.


What Happens When AI Is Used as an Autonomous Weapon

The threat to enterprises is already substantial, according to Ben Colman, co-founder and CEO at deepfake and AI-generated media detection platform Reality Defender. “We’re seeing bad actors leverage AI to create highly convincing impersonations that bypass traditional security mechanisms at scale. AI voice cloning technology is enabling fraud at unprecedented levels, where attackers can convincingly impersonate executives in phone calls to authorize wire transfers or access sensitive information,” Colman says. Meanwhile, deepfake videos are compromising verification processes that previously relied on visual confirmation, he adds. “These threats are primarily coming from organized criminal networks and nation-state actors who recognize the asymmetric advantage AI offers. They’re targeting communication channels first because they’re the foundation of trust in business operations.” Attackers are using AI capabilities to automate, scale, and disguise traditional attack methods. According to Casey Corcoran, field CISO at SHI company Stratascale, examples include crafting more convincing phishing and social engineering attacks and automatically modifying malware so that it is unique to each attack, thereby defeating signature-based detection.


Worldwide spending on genAI to surge by hundreds of billions of dollars

“The market’s growth trajectory is heavily influenced by the increasing prevalence of AI-enabled devices, which are expected to comprise almost the entire consumer device market by 2028,” said Lovelock. “However, consumers are not chasing these features. As the manufacturers embed AI as a standard feature in consumer devices, consumers will be forced to purchase them.” In fact, AI PCs could solve key issues organizations face when using cloud and data center AI instances, including cost, security, and privacy concerns, according to a study released this month by IDC Research. This year is expected to be the year of the AI PC, according to Forrester Research. It defines an AI PC as one that has an embedded AI processor and algorithms specifically designed to improve the experience of AI workloads across the central processing unit (CPU), graphics processing unit (GPU), and neural processing unit (NPU). ... “This reflects a broader trend toward democratizing AI capabilities, ensuring that teams across functions and levels can benefit from its transformative potential,” said Tom Mainelli, IDC’s group vice president for device and consumer research. “As AI tools become more accessible and tailored to specific job functions, they will further enhance productivity, collaboration, and innovation across industries.”