Daily Tech Digest - March 10, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



The Reality of Platform Engineering vs. Common Misconceptions

In theory, the definition of platform engineering is straightforward. It's a practice that involves providing a company's software developers with access to preconfigured toolchains, workflows, and environments, typically through the use of what's called an Internal Developer Platform (IDP). The goal behind platform engineering is also straightforward: It's to help developers work more efficiently and with fewer risks by allowing them to spin up compliant, ready-made solutions whenever they need them, rather than having to implement everything from scratch. ... Misuses of the term platform engineering aren't all that surprising. A similar phenomenon occurred when DevOps entered the tech lexicon in the late 2000s. Instead of universal recognition of DevOps as a distinct philosophy that involves melding software development with IT operations work, some folks effectively began using DevOps as a catch-all term to refer to anything modern or buzzworthy in the realm of software engineering. The same thing seems to be happening now in platform engineering. The term is apparently being used, at least by some professionals, to refer to any work that involves using a platform of some kind within the context of software development.


Why AI needs a kill switch – just in case

How do you develop your “AI kill switch?” The answer lies in securing the entire machine-driven ecosystem that AI depends on. Machine identities – such as digital certificates, access tokens and API keys – authenticate and authorise AI functions and their abilities to interact with and access data sources. Simply put, LLMs and AI systems are built on code, and like any code, they need constant verification to prevent unauthorised access or rogue behaviour. If attackers breach these identities, AI systems can become tools for cybercriminals, capable of generating ransomware, scaling phishing campaigns and sowing general chaos. Machine identity security ensures AI systems remain trustworthy, even as they scale to interact with complex networks and user bases – tasks that can and will be done autonomously via AI agents. Without strong governance and oversight, companies risk losing visibility into their AI systems, leaving them vulnerable. Attackers can exploit weak security measures, using tactics like data poisoning and backdoor infiltration – threats that are evolving faster than many organisations realise. ... Machine identity security is a critical first step – it establishes trust and resilience in an AI-driven world. This becomes even more urgent as agentic AI takes on autonomous decision-making roles across industries.
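
The article doesn't prescribe an implementation, but the core mechanic – short-lived, verifiable machine credentials whose revocation acts as the kill switch – can be sketched in a few lines. Everything below (names, token format, TTL) is a hypothetical illustration, not a real product API:

```python
import hashlib
import hmac
import time

# Hypothetical shared secret provisioned to one AI agent; rotating or
# revoking it is the "kill switch" that instantly de-authorises the agent.
AGENT_SECRET = b"rotate-and-revoke-me"

def issue_token(agent_id: str, ttl_seconds: int = 300) -> str:
    """Mint a short-lived machine-identity token for an AI agent."""
    payload = f"{agent_id}|{int(time.time()) + ttl_seconds}"
    sig = hmac.new(AGENT_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str) -> bool:
    """Gate every data-source or tool call on a valid, unexpired token."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(AGENT_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # tampered, or signed with a revoked secret
    return int(payload.rpartition("|")[2]) > time.time()  # reject expired tokens

token = issue_token("report-writer-agent")
assert verify_token(token)
```

Because every credential expires within minutes, revoking the secret bounds how long a compromised agent can keep acting – the constant verification the article calls for.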


Cyber resilience under DORA – are you prepared for the challenge?

Many damaging breaches have originated from within digital supply chains, through third-party vulnerabilities, or from internal weaknesses. In 2023, third-party attacks led to 29% of breaches, with 75% of third-party breaches targeting the software and technology supply chain. This evolving threat landscape has forced financial institutions to rethink their approach. The future of cyber resilience isn’t about building higher walls – it’s about securing every layer, inside and out. ... One of the most pressing concerns for financial institutions under DORA is the security of their digital supply chains. High-profile cyberattacks in recent years have demonstrated that vulnerabilities often originate not from within an organization's own IT infrastructure, but through weaknesses in third-party service providers, cloud platforms, and outsourced IT partners. DORA places a strong emphasis on third-party risk management, making it clear that security responsibility extends beyond a firm’s immediate network. Ensuring supply chain resilience requires a proactive and continuous approach. FSIs must conduct regular security assessments of all external vendors, ensuring that partners adhere to the same high standards of cybersecurity and risk management.


Ask a Data Ethicist: How Can We Ethically Assess the Influence of AI Systems on Humans?

Bezou-Vrakatseli et al. provide some guidance in this paper, which outlines the S.H.A.P.E. framework. S.H.A.P.E. stands for secrecy, harm, agency, privacy, and exogeneity. ... If you are not aware that you are being influenced or are unaware of the way in which the influence is taking place, there might be an ethical issue. The idea of intent to influence while keeping that intent a secret speaks to ideas of deception or trickery. ... You might be wondering – what actually constitutes harm? It’s not just physical harm. There is a range of possible harms including mental health and well-being, psychological safety, and representational harms. The authors note that this issue of what is harm – ethically speaking – is contestable, and that lack of consensus can make it difficult to address. ... Human agency has “intrinsic moral value” – that is to say we value it in and of itself. Thus, anything that messes with human agency is generally seen as unethical. There can be exceptions, and we sometimes make these when the human in question might not be able to act in their own best interests. ... Influence may be unethical if there is a violation of privacy. Much has been written about why privacy is valuable and why breaches of privacy are an ethical issue. The authors cite the following – limiting surveillance of citizens, restricting access to certain information, and curtailing intrusions into places deemed private or personal.


Is It Time to Replace Your Server Room with a Data Center?

Rare is the business that starts its IT journey with a full-fledged data center. The more typical route involves creating a server room first, then upgrading to a data center over time as IT needs expand. That raises the question: When should a business replace its server room with a data center? Which performance, security, cost and other considerations should a company weigh when deciding to switch? ... For some companies, the choice between a server room and a data center is clear-cut. A server room best serves small businesses without large-scale IT needs, whereas enterprises typically need a “real” data center. For medium-sized companies, the choice is often less clear. If a business has been getting by for years with just a server room, there is often no single tell-tale sign indicating it’s time to upgrade to a data center. And there is a risk that doing so will cost a lot of money without being necessary. ... A high incidence of server outages or downtime is another good reason to consider moving to a data center. That’s especially true if the outages stem from issues inherent to the nature of the server room – such as power system failures within the entire building, which are less of a risk inside a data center with its own dedicated power source.


How to safely dispose of old tech without leaving a security risk

Printers, especially those with built-in memory or hard drives, can retain copies of documents that were printed or scanned. Routers can store personal information related to network activity, including IP addresses, usernames, and Wi-Fi passwords. Meanwhile, smart TVs, home assistants (like Alexa, Google Home), and smart thermostats may store voice recordings, usage patterns, personal preferences, and even login credentials for streaming services like Netflix and Amazon Prime. As IoT devices become more common, they are increasingly at risk of storing sensitive data. ... Before disposing of a device, it’s essential to completely erase any confidential data. Deleting files or formatting the drive alone isn’t enough, as the data can still be retrieved. The best method for securely wiping data varies depending on the device. ... Windows users can use the “Reset this PC” feature with the option to remove all files and clean the drive, while macOS users can use “Erase Disk” in Disk Utility to securely wipe storage before disposal. Tools like DBAN (Darik’s Boot and Nuke) and BleachBit can also help securely erase data. DBAN is specifically designed to wipe traditional hard drives (HDDs) by completely erasing all stored data. However, it does not support solid-state drives (SSDs), as excessive overwriting can shorten their lifespan.
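
For scripted clean-up of individual files on a traditional HDD, the overwrite-before-delete idea behind such tools can be approximated in a short Python sketch. This is a minimal illustration of the general pattern, not a replacement for DBAN or a vendor's secure-erase utility, and for the SSD reasons noted above it should not be trusted on flash storage:

```python
import os
import secrets

def shred_file(path: str, passes: int = 3, chunk: int = 1 << 20) -> None:
    """Overwrite a file with random bytes before unlinking it (HDDs only:
    SSD wear levelling may leave the old blocks physically intact)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(remaining, chunk)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # force the overwrite out to the device
    os.remove(path)
```

For whole drives, the built-in options the article names (full "Reset this PC" with drive cleaning, or Disk Utility's erase) remain the safer route.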


The great software rewiring: AI isn’t just eating everything; it is everything

Right now, most large language models (LLMs) feel like a Swiss Army knife with infinite tools — exciting but overwhelming. Users don’t want to “figure out” AI. They want solutions: AI agents tailored for specific industries and workflows. Think: legal AI drafting contracts, financial AI managing investments, creative AI generating content, scientific AI accelerating research. Broad AI is interesting. Vertical AI is valuable. Right now, LLMs are too broad, too abstract, too unapproachable for most. A blank chat box is not a product, it is homework. If AI is going to replace applications, it must become invisible, integrating seamlessly into daily workflows without forcing users to think about prompts, settings or backend capabilities. The companies that succeed in this next wave will not just build better AI models, but better AI experiences. The future of computing is not about one AI that does everything. It is about many specialized AI systems that know exactly what users need and execute on that flawlessly. ... The old software model was built on scarcity. Control distribution, limit access, charge premiums. AI obliterates this. The new model is fluid, frictionless, and infinitely scalable.


Cybersecurity: The “What”, the “How” and the “Who” of Change

Cybersecurity is more complex than that: Protecting the firm from cyberthreats requires the ability to reach across corporate silos, beyond IT, towards business and support functions, as well as digitalised supply chains. You can throw as much money as you like at the problem, but if you give it to a technologist CISO to resolve, they will address it as a technology matter. They will put ticks on compliance checklists. They will close down audit points. They will deal with incidents and put out fires. They will deploy countless tools (to the point where this is now becoming a major operational issue). But they will not change the culture of your organisation around business protection and breaches will continue to happen as threats evolve. A lot has been said and written about the role of the “transformational CISO”, but I doubt there are many practitioners in the current generation of CISOs who can successfully wear that mantle. Simply because most have spent the last decade firefighting cyber incidents and have never been able to project a transformative vision over the mid to long-term, let alone deliver it. They have not developed the type of political finesse, of personal gravitas – of leadership, in one word – that they would require to be trusted and succeed at delivering a truly transformative agenda across the complex and political silos of the modern enterprise.


CISOs and CIOs forge vital partnerships for business success

“One of the characteristics of a business-aligned CISO is they don’t use the veto card in every instance,” Ijam explains. “When the CISO is at the table and understands the importance of outcomes and deliverables from a business perspective as well as risk management from a security perspective, they are able to pick their battles in a smart way.” Forging a peer CIO/CISO partnership also requires the right set of leaders. While CIOs have been honing a business orientation for years, CISOs need to follow suit, maturing into a role that understands business strategy and is well-versed in the language so they command a seat at the table. “The right CISO leader is someone that doesn’t speak in ones and zeros,” Whiteside says. “They need to be at the table talking in terms that business leaders understand — not about firewalls and malware.” Becoming a C-suite peer also means cultivating an independent voice — important because CIOs and CISOs often have varying points of view, separate priorities, and different tolerances for risk. It’s equally important to make sure the CISO’s voice — and security recommendations — are part of every discussion related to business strategy, IT infrastructure, and critical systems at the beginning, not as an afterthought.


India’s Digital Personal Data Protection Act: A bold step with unfinished business

The draft Digital Personal Data Protection Rules, 2025, released on 3 January, aim to operationalise the provisions of the Act. The Act will undoubtedly go a long way in safeguarding digital personal data. Whilst the benefits to the common citizen are laudable, there are clearly areas that need to be urgently addressed. ... The draft rules mandate data localisation, restricting the transfer of certain personal data outside India. This approach has faced criticism for potentially increasing operational costs for businesses and creating barriers to global data flows. A flexible approach could be taken with regard to data flows with friendly and trusted nations. Allowing cross-border data transfers to trusted jurisdictions with robust data protection frameworks will position India as a key player in global trade. India wants to increase exports of goods and services to achieve its vision of “Viksit Bharat” by 2047. ... Clear, technology-driven mechanisms for age verification that are not overly intrusive still need to be determined. Implementing this rule from a pragmatic perspective will be onerous. Self-declaration may turn out to be a potential way forward, given India’s massive rural population that accesses online services and platforms and the difficulty of implementing parental consent.

Daily Tech Digest - March 09, 2025


Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor


Software Development Teams Struggle as Security Debt Reaches Critical Levels

Software development teams face mounting challenges as security vulnerabilities pile up faster than they can be fixed. That's the key finding of Veracode's 15th annual State of Software Security (SoSS) report. ... According to Wysopal, several factors contribute to this prolonged remediation timeline:

- Growing Codebases and Complexity: As applications become larger and incorporate more third-party components, the scope for potential flaws increases, making it more time-consuming to isolate and remediate issues.
- Shifting Priorities: Many teams are under pressure to roll out new features rapidly. Security fixes are often deprioritized unless they are absolutely critical.
- Distributed Architectures: Modern microservices and container-based deployments can fragment responsibility and visibility. Coordinating fixes across multiple teams prolongs remediation.
- Shortage of Skilled AppSec Staff: Finding developers or security specialists with both security expertise and domain knowledge is challenging. Limited capacity can push out or delay fix timelines.

... "Many are using AI to speed up development processes and write code, which presents great risk," Wysopal said. "AI-generated code can introduce more flaws at greater velocity, unless they are thoroughly reviewed."


Want to win in the age of AI? You can either build it or build your business with it

From a business perspective, generative AI cannot operate in a technical vacuum -- AI-savvy subject matter experts are needed to adapt the technology to specific business requirements -- that's the domain expertise career track. "As AI models become more commoditized, specialized domain knowledge becomes increasingly valuable," Challapally said. "What sets true experts apart is their deep understanding of their specific industry combined with the ability to identify where and how gen AI can be effectively applied within it." Often, he warned, bots alone cannot relay such specific knowledge. ... The most intense need business leaders cite at this time "is for professionals who bridge both worlds -- those who deeply understand business requirements while also grasping the technical fundamentals of AI," he said. Rather than pure technologists, they seek individuals who combine traditional business acumen with technical literacy: "These are the type of people who can craft product visions, understand basic coding concepts, and gather sophisticated requirements that align technology capabilities with business goals." For those on the technical side, it's important "to master the art of prompting these tools to deliver accurate results," said Challapally.


Cyber Resilience Needs an Innovative Approach: Streamlining Incident Response for The Future

Incident response has historically been a reactive process, often hampered by time-consuming manual procedures and a lack of historical and real-time visibility. When a breach is detected, security teams scramble to piece together what happened, often working with fragmented information from multiple sources. This approach is not only slow but also prone to errors, leading to extended downtime, increased costs, and sometimes, the loss of crucial data. ... The quicker an enterprise or MSSP organization can respond to an incident, the lower the risk of disruption and the less damage it incurs. An innovative approach that automates and streamlines the collection and analysis of data in near real-time during a breach allows security teams to quickly understand the scope and impact, enabling faster decision-making and minimizing downtime. ... Automation reduces the risk of human error, which is often a significant factor in traditional incident response processes – riddled with fragmented methodologies. By centralizing and correlating data from multiple sources, an automated investigation system provides a more accurate, consistent and comprehensive view of the incident, leading to better-informed, more effective containment and remediation efforts.


Data Is Risky Business: Is Data Governance Failing? Or Are We Failing Data Governance?

“Data governance” has become synonymous in some areas of academic study and industry publication with the development of legislation, regulation, and standards setting out rules and common requirements for how data should be processed or put to use. It is also still considered synonymous with or a sub-category of IT Governance in much of the academic literature. And let’s not forget our friends in records and information management and their offshoot of data governance. ... While there is extensive discussion in academia and in practitioner literature about the need for people to lead on data and the importance of people performing data stewardship-type roles, there is nothing that has dug deeper to identify what we mean by “the right people.” ... In the organizations of today, however, we are dealing with business leadership and technology leadership for whom these topics simply did not exist when they were engaged in study before entering the workforce. Therefore, they operate within the mode of thinking and apply the mental models that were taught to them, or which have dominated the cultures of the organizations where they have cut their teeth and the roles they have had as they moved from entry-level to management functions to leadership roles.


How CISOs Will Navigate The Threat Landscape Differently In 2025

In 2025, resilience is the cornerstone of effective cybersecurity. The shift from a defensive mindset to a proactive approach is evident in strategies such as advanced attack surface analytics, continuous threat modeling and offensive security testing. I’ve seen many penetration testing as a service (PTaaS) providers place an emphasis on integrating continuous penetration testing with attack surface management (ASM) as an example of how organizations can stay one step ahead of adversaries. Organizations using continuous pentesting reported 30% fewer breaches in 2024 compared to those relying solely on annual assessments, showcasing the value of a proactive approach. The adoption of cybersecurity frameworks such as NIST and ISO 27001 provides a structured approach to managing risks, but these frameworks must be tailored to the unique needs of each enterprise. For example, enterprises operating in regulated industries such as healthcare, finance and critical infrastructure must prioritize compliance while addressing sector-specific vulnerabilities. CISOs are focusing on data-driven decision making to quantify risks and justify investments. By tying cybersecurity initiatives to financial outcomes, such as reduced downtime and lower breach costs, CISOs can secure buy-in from stakeholders and ensure long-term sustainability.


AI and Automation: Key Pillars for Building Cyber Resilience

AI is now moving from training to inference, helping you quickly make sense of or create a plan from the information you have. This is made possible based on improvements to how AI understands massive amounts of semi-structured data. New AI can figure out the signal from the noise, a critical step in framing the cyber resilience problem. The power of AI as a programming language combined with its ability to ingest semi-structured data opens up a new world of network operations use cases. AI becomes an intelligent helpline, using the criteria you feed it to provide guidance to troubleshoot, remediate, or resolve a network security or availability problem. You get a resolution in hours or days – not the weeks or months it would have taken to do it manually. ... AI is not the same as automation; instead, it enhances automation by significantly speeding up iteration, learning, and problem-solving processes. New AI allows you to understand the entire scope of a problem before you automate and then automate strategically. Instead of learning on the job – when you have a cyber resilience challenge, and the clock is ticking – you improve your chances of getting it right the first time. As the effectiveness of network automation increases, so too will its adoption. 


Adaptive Cybersecurity: Strategies for a Resilient Cyberspace

We are led to consider ‘systems thinking’ to address cyber risk. This approach examines how all the systems we oversee interact on a larger scale, uncovering valuable insights to quantify and mitigate cyber risk. This perspective encourages a paradigm shift and rethinking of traditional risk management practices, emphasizing the need for a more integrated and holistic approach. Evolving and increasingly sophisticated cyber risks have heightened both awareness and expectations around cybersecurity. Nowadays, businesses are being evaluated based on their preparedness, resilience and how effectively they respond to cyber risk. Moreover, it's crucial for companies to understand their disclosure obligations across market and industry levels. Consequently, regulators and investors demand that boards prioritize cybersecurity through strong governance. ... The CISO's role has evolved to include viewing cybersecurity not merely as an IT issue but as a strategic and business risk. This shift demands that CISOs possess a combination of technical expertise and strong communication skills, enabling them to bridge the gap between technology and business leaders. They should leverage predictive analytics or AI-based threat detection tools to proactively manage emerging cyber risks.


Choosing Manual or Auto-Instrumentation for Mobile Observability

Mobile apps run on specific devices and operating systems, which means that certain operations are standard across every app instance. For example, in an iOS app built on UIKit, the didFinishLaunchingWithOptions method informs the app developer that a freshly launched app is almost ready to run. Listening for this method in any app would in turn let you observe and learn more about the completion of app launch automatically. Quick, out-of-the-box instrumentation like this is easy to use. By importing an auto-instrumentation library to your app, you can hook into the activity of your application without writing custom code. Using auto-instrumentation provides standardized signals for actions that should be recognized in a prescribed way. You could listen for app launch, as described above, but also for the loading of views, for the beginning and ends of network requests, crashes and so on. Observability would be great if imported libraries did all the work. ... However, making sense of your mobile app requires more than just monitoring the ubiquitous signals of mobile app development. For one, mobile telemetry collection and transmission can be limited by the operating system that the app user chooses, since the OS is not designed to expose every one of its signals.


Planning ahead around data migrations

Understanding the full inventory of components involved in the data migration is crucial. However, it is equally essential to have a clearly defined target and to communicate this target to all stakeholders. This includes outlining the potential implications of the migration for each stakeholder. The impact of the migration will vary significantly depending on the nature of the project. For example, a simple infrastructure refresh will have a much smaller impact than a complete overhaul of the database technology. In the case of an infrastructure refresh, the primary impact might be a brief period of downtime while the new hardware is installed and the data is transferred. Stakeholders may need to adjust their workflows to accommodate this downtime, but the overall impact on their day-to-day operations should be minimal. On the other hand, a complete change of database technology could have far-reaching implications. Stakeholders may need to learn new skills to interact with the new database, and existing applications may need to be modified or even completely rewritten to be compatible with the new technology. This could result in a significant investment of time and resources, and there may be a period of adjustment while everyone gets used to the new system.


Your AI coder is writing tomorrow’s technical debt

With AI, this problem gets exponentially worse. Let’s say a machine writes a million lines of code – it can hold all of that in its head and figure things out. But a human? Even if you wanted to address a problem, you couldn’t do so. It’s impossible to sift through that amount of code you’ve never seen before just to find where the problem might be. In our case, what made it particularly tricky was that the AI-generated code had these very subtle logical flaws: not even syntactic issues, just small problems in the execution logic that you wouldn’t notice at a glance. The volume of technical debt increases not just because of complexity, but simply because of the sheer amount of code being shipped. It’s a natural law. Even as humans, if you ship more code, you will have more bugs and you will have more debt. If you are exponentially increasing the amount of code you’re shipping with AI, then yes, maybe you catch some issues during review, but what slips through just gets shipped. The volume itself becomes the problem. ... the solution lies in far better communication throughout the whole organisation, coupled with robust processes and tooling. ... The tooling side is equally important. We’ve customised our AI tools’ settings to align with our tech stack and standards. Things like prompt templates that enforce our coding style, pre-configured with our preferred libraries and frameworks.

Daily Tech Digest - March 08, 2025


Quote for the day:

“In my experience, there is only one motivation, and that is desire. No reasons or principle contain it or stand against it.” -- Jane Smiley


Synthetic identity blends real and fake data to enable fraud, demanding new protections

Manufactured synthetic identities merge and blend real identity details from different stolen identities. A real ID number might be paired with a fake name or address and linked to a deepfaked image that lines up with the hacked identity data. Manipulated synthetic identities are real identities in which an existing identity document has been altered. The widespread shift toward digital identity verification and authentication processes, as illustrated by the EUDI Wallet scheme, brings new risks: “the transition to digital identity opens up new areas of attack – precisely because AI-supported fraud scams are likely to become increasingly sophisticated in the future.” ... “The rate of development of generative AI presents a problem to not just ensuring a person is who they say they are, but also to content platforms who need to be sure that the content added by a user is genuine,” says the paper. “Given the potential risks and challenges in detecting generative AI, Yoti’s strategy emphasises early detection at the source, addressing both direct and indirect attack vectors.” While presentation attacks are a “relatively mature and well understood issue across the verification space,” well defended by effective liveness detection, more recently popularized injection attacks attempt to bypass liveness detection by hacking directly into a hardware device or virtual camera.


When to choose a bare-metal cloud

Bare-metal cloud services, by contrast, provide users with exclusive access to the underlying physical server hardware: no hypervisor, no virtual machines, no additional abstraction. This purity means full access to raw compute power, such as CPU, GPU, and memory resources, without virtualization’s added latency or restrictions. In essence, bare-metal clouds bridge the gap between the flexibility of cloud computing and the robust performance of dedicated on-premises servers. ... Certain applications can benefit from hardware architectures beyond the standard x86 processors, such as Arm’s or IBM’s Z mainframe architecture. Bare-metal clouds allow users to access these nonstandard architectures for testing or running workloads designed explicitly for them—another area where traditional virtual environments fall short. ... Government, finance, healthcare, and other regulated industries may need dedicated servers to meet regulatory or compliance mandates. Bare-metal clouds provide the necessary isolation while maintaining the flexibility of cloud deployment. ... Using bare-metal hardware often offers little room for provisioning beyond what’s physically available; no additional memory or hardware expansions can be made dynamically. 


Is Gen Z to Blame? Why Cybersecurity Feels Harder for IT Pros

Gen Z’s trust in social media is another cultural difference to be aware of. They’re not only listening to and watching a cohort of self-made influencers, but they’re also following their advice — some of which isn’t sound. Young adults glean a lot of information from social media sites, and this raises a few concerns for employers. Young workers have a propensity to believe in what they learn from social media, making them susceptible to scams such as online fraud and get-rich-quick schemes. ... A younger workforce brings fresh pairs of eyes and new ideas to the table. They’re also looking for employers who reflect their preferences, including ones with familiar technologies. Chief information security officers (CISOs) are often dealing with legacy infrastructure and outdated solutions as a primary barrier preventing them from addressing cybersecurity obstacles — and hindering them from meeting Gen Z’s needs. Another challenge is that Gen Z newcomers have shorter work histories and may lack critical in-office and work-from-home experience to recognize phishing, job recruitment, social engineering and deepfake scams. Gen Z is disclosing higher rates of phishing victimization than any other generation, according to the National Cyber Security Alliance.


APM Tools and High-Availability Clusters: A Powerful Combination for Network Resiliency

APM tools are well-positioned as a means of feeding better data into the platforms enterprises use to monitor and manage IT infrastructure. APM data provides a more precise understanding of system health, enabling IT management to establish more precise parameters for making decisions with the confidence of good, timely data. High-availability (HA) clusters are either hardware-based (SAN clusters) or software-based (SANless clusters), supporting seamless failover of services to backup resources in the event of an incident. ... The combination of APM and HA makes it easier for enterprises to improve network resiliency by supporting better decision making and the use of automation to enable seamless failover, predictive analytics, self-healing, and other capabilities consistent with maximizing network performance, uptime, and operational resilience. When used in a multi-cloud environment, services can fail over to the organization's secondary cloud provider, which is a major advantage when an outage affects a cloud services provider. ... As some enterprises evolve toward autonomous IT, APM data provides a more precise understanding of system health, enabling IT management to establish more precise parameters for making decisions with confidence.


Why Enterprise Architecture Is Having A Moment

One can think of enterprise architecture as the description and design of the complex web of technologies that supports a particular set of business capabilities. I say “description” because most companies don’t initially have an enterprise architect. Instead, they let their technology landscape grow organically. ... Think about everything an organization would need to do to move from its current state to one that reflected “modern standards.” Describing and inventorying the current state would naturally be part of that. But more important would be defining those standards. In today’s world, such standards would include prioritizing cloud technology, adopting a service-oriented architecture for software built in-house, working with open APIs, and so forth. Enterprise architects are in the business of both defining the technology standards for the business as well as governing the adoption of new and emerging technology in conformity with those standards. ... But the definition of standards does not take place in a vacuum. Instead, this work is guided by the strategic aims of the organization. These aims, in turn, can be viewed through the lens of business capabilities. Specifically, the business must determine what capabilities it will need to realize its strategy in the future. 


Bridging Europe's Cybersecurity Divide Through Political Will

The debate over cybersecurity regulation has been contentious in recent years, with strong positions on all sides. Europe has introduced multiple pieces of regulation, which has led to growing complaints about overlapping requirements and duplications. Which regulations apply to my company, among all existing ones? Which frameworks should I use to improve security and then demonstrate compliance? Which authorities should I report incidents to? Is there a standardized approach to managing and monitoring third parties? ... There is a broad consensus that cybersecurity regulatory requirements should be improved in Europe and beyond. We need to build an effective and efficient legislative framework for both functional and political reasons. On one hand, resources are limited and have to be allocated efficiently to meaningful security measures. On the other hand, frustration with redundant or unclear requirements risks undermining the progress achieved so far, empowering those who oppose regulation entirely. ... While these operations require time and resources, the main obstacle is not technology. The real challenge lies in negotiating and agreeing on what an efficient system looks like in terms of governance and minimum standards to follow.


What is risk management? Quantifying and mitigating uncertainty

Risk management is the process of identifying, analyzing, and mitigating uncertainties and threats that can harm your company or organization. No business venture or organizational action can completely avoid risk, of course, and working too hard to do so would mean forgoing potentially lucrative opportunities and strategies. ... IT leaders in particular must be able to integrate risk management philosophies and techniques into their planning, as IT infrastructure and spending can represent, within the company, an intense combination of risk (of cyberattacks, downtime, or botched rollouts, for instance) and benefits realized as increased capabilities or efficiencies. Some companies, particularly those in heavily regulated industries, such as banks and hospitals, centralize risk in a single department under a top-level chief risk officer (CRO) or similar executive role. A CRO might find themselves with responsibilities that overlap or conflict with those of CSOs, CISOs, and CIOs, and in some orgs without a clearly defined risk leader, ambitious infosec execs might try to take on that role for themselves. In any case, IT leaders need to understand and apply risk management in the areas under their purview.
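
The article doesn't show a worked quantification, but one widely used starting point (a classic formula from the risk-management literature, not from this piece) is annualized loss expectancy, which turns a threat scenario into a number that can be weighed against the cost of mitigating it:

```python
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    """Classic risk quantification: ALE = SLE * ARO, where the single loss
    expectancy (SLE) is asset value * exposure factor (fraction lost per event)."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical numbers: a $2M system losing 25% of its value per incident,
# with incidents expected roughly twice a year -> $1M expected annual loss.
print(annualized_loss_expectancy(2_000_000, 0.25, 2.0))  # 1000000.0
```

On those assumed numbers, any control costing well under $1M a year that meaningfully reduces the occurrence rate is straightforward to justify – which is the kind of trade-off IT leaders are being asked to make.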


Why Using Multiple AIs Is Trending Now

“Companies are building sophisticated AI stacks that treat general-purpose LLMs as foundational utilities while deploying specialized AI copilots and agents for coding, design, analytics, and industry-specific tasks. This fragmentation exposes the hubris of incumbent AI companies marketing themselves as complete solutions,” Moy adds. ... “Multimodality may sound like a remedy for generative AI’s shortcomings in multifaceted processes, but this, too, is more effective in the context of purpose-specific models,” says Maxime Vermeir. “Multimodality doesn’t imply an AI multitool that can excel in any area, but rather an AI model that can draw insights from various forms of ‘rich’ data beyond just text, such as images or audio. Still, this can be narrowed for businesses’ benefit, such as accurately recognizing images included in specific document types to further increase the autonomy of a purpose-built AI tool. While having multiple generative AI tools may sound more cumbersome than a single catch-all solution, the difference in ROI is undeniable,” Vermeir adds. ... “Using the different language models in the same tool has multiple reasons, the main ones being that every model has its strengths and weaknesses and therefore different types of queries to ChatGPT may be handled better or worse depending on the model. ... ” Feinberg adds.


8 obstacles women still face when seeking a leadership role in IT

When women are subjected to undermining stereotypes, have few female role models, are spoken over, or treated as if their contribution isn’t welcome, imposter syndrome is difficult to avoid. “When a woman looks at a job, she’s only going to apply if she meets 90% of the criteria,” agrees Debby Briggs, CISO at NetScout. ... Being seen as an outsider also costs women opportunities, since leaders tend to promote people they know. All the women I spoke to told me they survive this by building their own network. ... “A mentor can provide guidance, and a sponsor is someone who actively opens doors for you.” “This is a must-have,” says Briggs, who adds that she collects mentors. Anytime she finds someone she admires or who has a skill she lacks, she reaches out. “Your mentors don’t have to be women,” she says. ... Women say they feel invisible. “If I am at a tech event standing next to a man and another man walks up to us, more than 50% of the time he will address the man,” says Briggs. This invisibility happens in small interactions and large ones. The website for tech companies is often filled with the faces of white men. The speakers at tech events are all male. How do you scale this obstacle? “If someone invites me to an event, I look at who is on the panel. If it’s all white men, I tell them they don’t have a diverse enough perspective and choose not to go,” says Briggs.


How To Handle "Urgent Request" in Scrum

The first step the Product Owner needs to take is to assess whether the request aligns with the current Sprint Goal. However, based on my experience, most 'urgent requests' are unrelated to the Sprint Goal. They often come from individuals who are detached from the Scrum team's way of working. In many cases, those people are not even aware of what a 'Sprint Goal' is. If the request does not align with the Sprint Goal, I use a tool called the Financial Impact vs. Reputation Impact Matrix. As a Product Owner, I want the impact or potential damage to the company to be visualized in two dimensions so that I do not make decisions based on a single factor. The main purpose of this tool is to quantify the urgency of those "urgent requests." As a Product Owner, we do not want our team to work based on opinions or, even worse, political power; we want them to work based on facts or data. Many Scrum teams order their Product Backlog based on value, and they use potential revenue as the value attribute. Unlike potential revenue, which is expressed in positive terms, financial impact and reputation impact are negative. If the impact is not negative, as a Product Owner, I would not consider the request as urgent. Instead, it can wait and be stored in the Product Backlog for further discussion. 
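
The article doesn't define scales for the two axes, so the sketch below assumes a simple 0-3 score per axis and made-up decision thresholds; it only illustrates how the matrix turns an "urgent" opinion into a comparable number:

```python
from dataclasses import dataclass

IMPACT = {"none": 0, "low": 1, "medium": 2, "high": 3}  # assumed scale

@dataclass
class UrgentRequest:
    title: str
    financial_impact: str    # estimated damage if the request waits
    reputation_impact: str

def triage(req: UrgentRequest) -> str:
    """Place a request on the financial-vs-reputation matrix and decide."""
    f, r = IMPACT[req.financial_impact], IMPACT[req.reputation_impact]
    if f == 0 and r == 0:
        # no negative impact -> by the article's rule, not urgent at all
        return "park in the Product Backlog for further discussion"
    if max(f, r) == 3:
        return "treat as urgent: weigh against the Sprint Goal with the team"
    return "time-box a stakeholder decision this Sprint"

print(triage(UrgentRequest("broken invoice export", "high", "medium")))
```

Scoring both axes before deciding keeps the conversation anchored in data rather than in whoever shouts "urgent" the loudest.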

Daily Tech Digest - March 07, 2025


Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batton


Operational excellence with AI: How companies are boosting success with process intelligence everyone can access

The right tooling can make a company’s processes visible and accessible to more than just its process experts. With strategic stakeholders and line-of-business users involved, the very people who best know the business can contribute to innovation, design new processes and cut out endless wasted hours briefing process experts. AI, essentially, lowers the barrier to entry so everyone can come into the conversation, from process experts to line-of-business users. This speeds up time-to-value in transformation. ... Rather than simply ‘survive,’ companies can use AI to build true resilience — or antifragility — in which they learn from system failures or cybersecurity breaches and operationalize that knowledge. By putting AI into the loop on process breaks and testing potential scenarios via a digital twin of the organization, non-process experts and stakeholders are empowered to mitigate risk before escalations. ... Non-process experts must be able to make data-driven decisions faster with AI-powered insights that recommend best practices and design principles for dashboards. Any queries that arise should be answered by means of automatically generated visualizations which can be integrated directly into apps — saving time and effort.


Why Security Leaders Are Opting for Consulting Gigs

CISOs are asked to balance business objectives alongside product and infrastructure security, ransomware defense, supply chain security, AI governance, and compliance with increasingly complex regulations like the SEC's cyber-incident disclosure rules. Increased pressure for transparency puts CISOs in a tough situation when they must choose between disclosing an incident that could have adverse effects on the business or not disclosing it and risking personal financial ruin. ... The vCISO model emerged as a practical solution, particularly for midsize companies that need executive-level security expertise but can't justify a full-time CISO's compensation package. ... The surge in vCISOs should serve as a warning to boards and executives. If you're struggling to retain security leadership or considering a virtual CISO, you need to examine why. Is it about flexibility and cost, or have you created an environment where security leaders can't succeed? The pendulum will inevitably swing back as organizations realize that effective security leadership requires consistent, dedicated attention. ... Your CISO is working hard to protect your organization. So who will protect your CISO? Now is a great time to check in on them. Make sure they feel like they're fighting a winnable fight. 


How to Build a Reliable AI Governance Platform

An effective AI governance platform includes four fundamental components: data governance, technical controls, ethical guidelines and reporting mechanisms, says Beena Ammanath, executive director of the Global Deloitte AI Institute. "Data governance is necessary for ensuring that data within an organization is accurate, consistent, secure and used responsibly," she explains in an online interview. Technical controls are essential for tasks such as testing and validating GenAI models to ensure their performance and reliability, Ammanath says. "Ethical and responsible AI use guidelines are critical, covering aspects such as bias, fairness, and accountability to promote trust across the organization and with key stakeholders." ... "AI governance requires a multi-disciplinary or interdisciplinary approach and may involve non-traditional partners such as data science and AI teams, technology teams for the infrastructure, business teams who will use the system or data, governance and risk and compliance teams -- even researchers and customers," Baljevic says. Clark advises working across stakeholder groups. "Technology and business leaders, as well as practitioners -- from ML engineers to IT to functional leads -- should be included in the overall plan, especially for high-risk use case deployments," she says.


Reality Check: Is AI’s Promise to Deliver Competitive Advantage a Dangerous Mirage?

What happens when AI makes our bank’s products completely commoditized and undifferentiated? It’s not a defeatist question for the industry. Instead, it suggests a shortcoming in bank and credit union strategic planning about AI, Henrichs says. "Everyone’s asking about efficiency gains, risk management, and competitive advantages from AI," he suggests. "The uncomfortable truth is that if every bank has access to the same AI capabilities [and increasingly do through vendors like nCino, Q2, and FIS], we’re racing toward commoditization at an unprecedented speed." ... How can boards lead the institution to use AI to amplify existing competitive advantages? It’s not just about the technology. It’s "the combination of technology stack," says Jim Marous, Co-Publisher of The Financial Brand, with "people, leadership and willingness to take risks that will result in the quality of AI looking far different from bank A to bank Z. AI [is about] rethinking what we do. Further, fast follower doesn’t cut it because trying to copy… ignores the fundamental strategic changes [happening] behind the scenes." Creativity is not exactly a top priority in an industry accountable day-in and day-out to regulators, yet it’s required as technology applies commoditization pressure.


A strategic playbook for entrepreneurs: 4 paths to success

To make educated choices as an entrepreneur, Scott and Stern recommend a sequential learning process known as test two, choose one for the four strategies within the compass. This is a systematic process where entrepreneurs consider multiple strategic alternatives and identify at least two that are commercially viable before choosing just one. As the authors write in their book, “The intellectual property and architectural strategies are worth testing for entrepreneurs who prefer to put in the work developing and maintaining proprietary technology; meanwhile, value chain and disruption may work better for leaders looking to execute quickly.” Scott referred to Vera Wang as a classic example of sequential learning. As a Ralph Lauren employee and bride-to-be at 35, Wang told her team that she felt there was an untapped market for older women shopping for wedding dresses. The company disagreed, so Wang opened her own shop — but she didn’t launch her line of dresses immediately. Instead, Scott said, Wang filled her shop with traditional dresses and offered only one new dress of her own. The goal was to see which types of customers were interested, as well as which aesthetics ultimately sold, before she started designing her new line. “[Wang] was able to take what she learned about design, customer, messaging, and price point and build it into her venture,” Scott said.


Increasing Engineering Productivity, Develop Software Fast and in a Sustainable Way

The real problem comes when speed means cutting corners - skipping tests, ignoring telemetry, rushing through code reviews. That might seem fine in the moment, but over time, it leads to tech debt and makes development slower, not faster. It’s kind of like skipping sleep to get more done. One late night? No problem. But if you do it every night, your productivity tanks. Same with software - if you never take time to clean up, everything gets harder to change. ... Software engineering productivity and sustainability are influenced by many factors and can mean different things to different people. For me, the two primary drivers that stand out are code quality and efficient processes. High-quality code is modular, readable, and well-documented, which simplifies maintenance, debugging, and scaling, while reducing the burden of technical debt. ... if the developers are not complaining enough, it’s probably because they’ve become complacent with, or resigned to, the status quo. In those cases, we can adopt the "we’re all one team" mindset and actually help them deliver features for a while – on the very clear understanding that we will be taking notes about everything that causes friction and then going and fixing that. That’s an excellent way to get the ground truth about how development is really going: listening, and hands-on learning.


Rethinking System Architecture: The Rise of Distributed Intelligence with eBPF

In an IT world driven by centralized decision-making, gathering insights and applying intelligence often follows a well-established — yet limiting — pattern. At the heart of this model, large volumes of telemetry, observability, and application data are collected by “dumb” data collectors. For analysis, these collectors gather information and ship it to centralized systems, such as databases, security information and event management (SIEM) platforms, or data warehouses. ... By processing data at its origin, we significantly reduce the amount of unnecessary or irrelevant data sent over the network, resulting in lower information transfer overhead. This minimizes the load on the infrastructure itself and cuts down on data storage and processing requirements. The scalability of our systems no longer needs to hinge on the ability to expand storage and analytics power, which is both expensive and inefficient. With eBPF, distributed systems can now analyze data locally, allowing the system to scale out more efficiently as each node can handle its own data processing needs without overwhelming a centralized point of control — and failure. Instead of transferring and storing every piece of data, eBPF can selectively extract the most relevant information, reducing noise and improving the overall signal quality.
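
As a concrete (if simplified) illustration, the BCC (BPF Compiler Collection) Python frontend lets a node filter events in-kernel so that only the relevant fraction ever reaches a collector. The probe point, port filter, and output below are illustrative assumptions rather than a reference implementation, and running it requires BCC and root privileges:

```python
from bcc import BPF

# In-kernel filter: only outbound IPv4 connects to port 443 become events,
# so the noise is dropped at the source instead of being shipped upstream.
prog = """
#include <net/sock.h>
#include <linux/in.h>

int trace_connect(struct pt_regs *ctx, struct sock *sk,
                  struct sockaddr *uaddr) {
    struct sockaddr_in *addr = (struct sockaddr_in *)uaddr;
    u16 dport = addr->sin_port;        // network byte order
    if (ntohs(dport) != 443)
        return 0;                      // filtered in-kernel: never transmitted
    bpf_trace_printk("tls connect, pid=%d\\n",
                     bpf_get_current_pid_tgid() >> 32);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")
b.trace_print()  # user space now streams only the pre-filtered events
```

Everything that fails the predicate costs a few kernel instructions and nothing more – no serialization, no network transfer, no central storage.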


How Explainable AI Is Building Trust in Everyday Products

Explainable AI has already picked up tremendous momentum in almost every industry. E-commerce platforms are now starting to offer users detailed insight into why a certain product is recommended to them. This reduces decision fatigue and improves the overall shopping experience. Even streaming services such as Netflix and Spotify make suggestions like “Because you watched…” or “Inspired by your playlist.” These insights make users feel much more connected with what they consume. In healthcare and fitness, the stakes are higher. Users literally rely on apps for critical insight into their health and well-being. Take a dietary suggestion or an exercise recommendation: If explainable AI provides insight into the whys, then users are more likely to feel knowledgeable and confident in those decisions. Even virtual assistants like Alexa and Google Assistant have added explainability features that provide much-needed context for their suggestions and enhance the user experience. ... Explainable AI still faces a number of challenges that stand in the way of its implementation. Simplifying highly complex AI decisions into an explainable form that users can consume is not a trivial task. The balance lies in providing clear explanations without oversimplification or misrepresentation of the logic.


IT execs need to embrace a new role: myth-buster

It’s more imperative than ever that IT leaders from the CIO on down educate their colleagues. It’s far too easy for eager early adopters to get into tech trouble, and it’s better to head off problems before your corporate data winds up, say, being used to train a genAI model. This teaching role is critical for high-ranking execs (C-level execs, board members) in addition to those on the enterprise front lines. CFOs tend to fall in love with promised efficiencies and would-be workforce reductions without understanding all of the implications. CEOs often want to support what their direct reports want — when possible — and board members rarely have any in-depth knowledge of technology issues. It’s especially critical for IT Directors, working with the CIO, to become indispensable sources of tech truth for any company. Not so long ago, business units almost always had to route their technology needs through IT. No more. It’s not a battle that can be won by edicts or directives. IT directives are often ignored by department heads, and memo mayhem won’t help. You have to position your advice as cautionary, educational — helpful even — all in a bid to spare the business unit various disasters. You are their friend. Only then does it have a chance of working. 


Increased Investment in Industrial Cybersecurity Essential for 2025

“The software used in machine controls and other components should be continuously updated by manufacturers to close newly discovered security gaps,” said the CEO of ONEKEY. He cites typical examples such as manufacturing robots, CNC machines, conveyors, packaging machines, production equipment, building automation systems, and heating and cooling systems, which, in some cases, rely on outdated software, making them targets for hackers. ... Firmware, the software embedded in digital control systems, connected devices, machines, and equipment, should be systematically tested for cyber resilience, advises Jan Wendenburg, CEO of ONEKEY. However, according to a report, less than a third (31 percent) of companies regularly conduct security checks on the software integrated into connected devices to identify and close vulnerabilities, thereby reducing potential entry points for hackers. ... Current practices fall far behind the required standards, as shown by the “OT + IoT Cybersecurity Report” by ONEKEY. ... “Manufacturers should align their software development with the upcoming regulatory requirements,” advised Jan Wendenburg. He added, “It is also recommended that the industry requires its suppliers to guarantee and prove the cyber resilience of their products.”

Daily Tech Digest - March 06, 2025


Quote for the day:

"Great leaders do not desire to lead but to serve." -- Myles Munroe


RIP (finally) to the blockchain hype

Fowler is not alone in his skepticism about blockchain. It hasn’t yet delivered practical benefits at scale, says Salome Mikadze, co-founder at software development firm Movadex. Still, the technology is showing promise in some niche areas, such as secure data sharing or certain supply chain scenarios, she says. “Most of us agree that while it’s an exciting idea, its real-world applications are still limited,” Mikadze adds. “In short, blockchain is on the shelf for now — something we check in on periodically, but not a priority until it proves its worth in the real world.” The crazy hype around digital art NFTs turned blockchain into a bit of a joke, adds Trevor Fry, an IT consultant and fractional CTO. Many organizations haven’t found other uses for blockchain, he says. “Blockchain was marketed as this must-have innovation, but in practice, it doesn’t solve a problem that many companies or people have,” he says. “Unlike AI and LLMs, which have real-world applications across industries and have such a low barrier to entry that everyone can easily try it, blockchain’s use cases are very niche, though not useless.” Fry sees eventual benefits in supply chain tracking and data integrity, situations where a secure and decentralized record can matter. “But right now, it’s not solving a big enough pain point for most organizations to justify the complexity and cost and hiring people who know how to develop and work with it,” he adds. 


The 5 stages of incident response grief

Starting with denial and moving through anger, bargaining, depression, and acceptance, security experts can take a few lessons from the grieving process ... when you first see evidence of an incident in progress, you might first consider alternate explanations. Is it a false alarm? Did an employee open the wrong application by mistake? Maybe an automated process is misfiring, or a misconfiguration is causing an alert to trigger. You want to consider your options before assuming the worst. ... Once you confirm that it isn’t a false alarm and there is, in fact, an attacker present in the system, your first thought is probably, “This is going to consume the next few days, weeks, or months of my life.” You may become angry at a specific team for not following security guidelines or shortcutting a process. ... Sadly, getting an intruder out of your system is rarely a quick and easy process. But understanding the layout of your digital landscape and working with stakeholders throughout the organization can help ensure you’re making the right decisions at the right time. ... With the recovery process well underway, it’s time to take what you’ve learned and apply it. Now is the time to start bringing in all those suppressed thoughts from the earlier stages. That begins with understanding what went wrong. What was the cyber kill chain? What vulnerabilities did the attackers exploit to gain access to certain systems? How did they evade detection solutions? Are certain solutions not working as well as they should?


How to Manage Software Supply Chain Risks

Developers can’t manage risks on their own, nor can CISOs. “Effectively protecting, defending and responding to supply chain events should be a combination among many departments [including] security, IT, legal, development, product, etc.,” says Ventura. “Not one department should fully own the entire supply chain program as it touches many business units within an organization. Spearheading the program typically falls under the CISO or the security team as cybersecurity risks should be considered business risks.” One of the most common mistakes is having a false sense of security. “Thinking with the mindset of, ‘If I haven’t had a supply chain issue before, why fix it now?’ leads to complacency and a lack of taking cybersecurity seriously throughout the business,” says Ventura. “Another common mistake is organizations relying too heavily on vendor assessments, where an organization can say they are secure but haven’t put in robust controls. Trusting an assessment completely without verification can lead to major issues down the road.” By failing to focus on supply chain risks, organizations put themselves at high risk of data breaches, financial loss, regulatory and compliance fines, and business and reputational damage.
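One low-effort control that goes beyond trusting vendor assessments is verifying what vendors actually ship. Below is a minimal Python sketch of that idea; the trusted-artifacts.json manifest and the artifact filename are hypothetical, and a mature program would rely on signed SBOMs or cryptographic artifact signatures rather than a hand-maintained hash list.

import hashlib
import json
from pathlib import Path

# Hypothetical manifest of vetted vendor artifacts and their SHA-256
# digests, recorded when each package was first reviewed.
MANIFEST = json.loads(Path("trusted-artifacts.json").read_text())

def verify_artifact(path):
    """Recompute an artifact's digest and compare it to the pinned value."""
    name = Path(path).name
    expected = MANIFEST.get(name)
    if expected is None:
        return f"{name}: not in manifest - treat as untrusted"
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected:
        return f"{name}: digest mismatch - possible tampering"
    return f"{name}: verified"

print(verify_artifact("vendor-agent-2.4.1.tar.gz"))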


FinOps for Software as a Service (SaaS)

The challenges of managing public cloud spending are mirrored in the proliferation of SaaS across organizations through decentralized, individual-level procurement and corporate-credit-card-funded purchase orders, resulting in limited organization-level visibility into cost and usage. Additionally, SaaS is a consideration in the typical Build-vs-Buy-vs-Rent discussions. Engineers often have a choice between building their own solutions and purchasing one from a SaaS provider. Because of this, there is less of a clear distinction between workloads managed in the Public Cloud and workloads managed by SaaS vendors (or shared between them). The spend is therefore all part of the same value creation process, and engineering teams want to know the total cost of running their solutions. The other FinOps goals and outcomes naturally follow. By iteratively applying Framework Capabilities to achieve the outcomes described by the four FinOps Domains (Understand Cost & Usage, Quantify Business Value, Optimize Cost & Usage, and Manage the FinOps Practice), the same financial accountability and transparency can be established for SaaS spending, ensuring organizations keep their SaaS costs aligned with business goals and the associated technology strategy.
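As a toy illustration of that "total cost" view, the Python sketch below merges a cloud billing export and a SaaS spend export into one per-team figure. The CSV filenames and the team/cost_usd columns are assumptions for the example; real exports differ by provider and usually need normalization first.

import csv
from collections import defaultdict

def total_cost_by_team(paths):
    """Sum spend per team across several cost-export CSV files."""
    totals = defaultdict(float)
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["team"]] += float(row["cost_usd"])
    return dict(totals)

# One combined view across public cloud and SaaS spend, so engineering
# teams can see the full cost of running their solutions.
print(total_cost_by_team(["cloud_spend.csv", "saas_spend.csv"]))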


The role of data centres in national security

The UK government’s recent decision to designate certain data centres as Critical National Infrastructure (CNI) represents a significant shift in recognising their role in safeguarding the nation’s essential services. Data centres are the backbone of industries like healthcare, finance and telecommunications, placing them at increased risk of cyberattacks. While this move enhances protection for specific facilities, it also raises important questions for the wider industry. ... A critical first step for data centres is to conduct a thorough security audit. This process helps to create a complete inventory of all endpoints across both OT and IT environments, including legacy devices that may have been overlooked. Understanding the scope of connected systems and their potential vulnerabilities provides a clear foundation for implementing effective security measures. Once an inventory is established, technologies like Endpoint Detection and Response (EDR) can be deployed to monitor critical endpoints, including servers and workstations, for signs of malicious activity. EDR solutions enable rapid containment of threats, preventing them from spreading across the network. Extended Detection and Response (XDR) builds on this by unifying threat detection across endpoints, networks and servers, offering a holistic view of vulnerabilities and enabling more comprehensive protection.
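As a rough illustration of the inventory step, the Python sketch below sweeps a subnet for reachable TCP services. The subnet and port list are hypothetical, and one caveat matters in this context: even benign connection attempts can upset fragile legacy OT equipment, so active scanning of OT segments needs explicit approval and often a passive alternative.

import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical scope: one /24 subnet and a handful of common IT/OT ports
# (502 is Modbus/TCP). A production audit would cover far more, and would
# favor passive discovery on sensitive OT segments.
SUBNET = "192.168.1."
PORTS = [22, 80, 443, 502, 3389]

def probe(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep():
    """Build a host -> open-ports inventory for the whole subnet."""
    inventory = {}
    with ThreadPoolExecutor(max_workers=64) as pool:
        futures = {
            pool.submit(probe, SUBNET + str(i), p): (SUBNET + str(i), p)
            for i in range(1, 255)
            for p in PORTS
        }
        for future, (host, port) in futures.items():
            if future.result():
                inventory.setdefault(host, []).append(port)
    return inventory

for host, ports in sorted(sweep().items()):
    print(host, ports)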


Will the future of software development run on vibes?

When it comes to defining what exactly constitutes vibe coding, Willison makes an important distinction: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant." Vibe coding, by contrast, involves accepting code without fully understanding how it works. While "vibe coding" originated with Karpathy as a playful term, it may encapsulate a real shift in how some developers approach programming tasks—prioritizing speed and experimentation over deep technical understanding. And to some people, that may be terrifying. Willison emphasizes that developers need to take accountability for their code: "I firmly believe that as a developer you have to take accountability for the code you produce—if you're going to put your name to it you need to be confident that you understand how and why it works—ideally to the point that you can explain it to somebody else." He also warns about a common path to technical debt: "For experiments and low-stake projects where you want to explore what's possible and build fun prototypes? Go wild! But stay aware of the very real risk that a good enough prototype often faces pressure to get pushed to production."


How the Emerging Market for AI Training Data is Eroding Big Tech’s ‘Fair Use’ Copyright Defense

“It would be impossible to train today’s leading AI models without using copyrighted materials,” the company wrote in testimony submitted to the House of Lords. “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.” Missed in OpenAI’s pleading was the obvious point: Of course AI models need to be trained with high-quality data. Developers simply need to fairly remunerate the owners of those datasets for their use. One could equally argue that “without access to food in supermarkets, millions of people would starve.” Yes. Indeed. But we do need to pay the grocer. ... Anthropic, developer of the Claude AI model, answered a copyright infringement lawsuit one year ago by arguing that the market for training data simply didn’t exist. It was entirely theoretical—a figment of the imagination. In federal court, Anthropic submitted an expert opinion from economist Steven R. Peterson. “Economic analysis,” wrote Peterson, “shows that the hypothetical competitive market for licenses covering data to train cutting-edge LLMs would be impracticable.” Obtaining permission from property owners to use their property: So bothersome and expensive.


3 Ways FinOps Strategies Can Boost Cyber Defenses

By providing visibility into cloud costs, FinOps uncovers underutilized or redundant resources and subscriptions, or over-provisioned budgets that can be redirected to strengthen cybersecurity. Through continuous real-time monitoring, organizations can proactively identify trends, anomalies, or emerging inefficiencies, ensuring they align their resources with strategic goals. For example, regular audits may uncover unnecessary overlapping subscriptions or unused security features, while ongoing monitoring ensures these inefficiencies do not recur. ... A FinOps approach also involves continuous monitoring, which not only identifies potential security gaps before they escalate but also matches security measures to organizational goals. Furthermore, FinOps helps with financial risk management by assessing the costs of potential breaches and allocating resources effectively. Through ongoing risk assessments and strategic budget adjustments, organizations can make better use of their security investments, helping to maintain a robust defense against threats while still achieving their business aims. ... Moreover, governance frameworks are built into FinOps principles, which leads to consistent application of security policies and procedures. This includes setting up governance frameworks that define roles, responsibilities, and accountability for security and financial management.
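As a sketch of what continuous monitoring can mean at its simplest, the snippet below flags days whose spend deviates sharply from the average. The daily figures are made up for the example, and real FinOps tooling uses far more robust seasonal baselines than a plain z-score.

from statistics import mean, stdev

def flag_anomalies(daily_costs, threshold=2.0):
    """Flag indexes whose spend sits more than `threshold` standard
    deviations from the mean - a crude baseline for cost monitoring."""
    mu = mean(daily_costs)
    sigma = stdev(daily_costs)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(daily_costs) if abs(c - mu) / sigma > threshold]

# Hypothetical daily spend for one security subscription; day 5 spikes 4x.
costs = [120.0, 118.5, 121.2, 119.8, 122.1, 480.0, 120.4]
print(flag_anomalies(costs))  # -> [5]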


Black Inc has asked authors to sign AI agreements. But why should writers help AI learn how to do their job?

Writers were reportedly asked to grant Black Inc the ability to exercise key rights within their copyright to help develop machine learning and AI systems. This includes using the writers’ work in the training, testing, validation and subsequent deployment of AI systems. The contract is offered on an opt-in basis, said a Black Inc spokesperson, and the company would negotiate with “reputable” AI companies. But authors, literary agents and the Australian Society of Authors have criticised the move. “I feel like we’re being asked to sign our own death warrant,” said novelist Laura Jean McKay. ... In theory, the licensing solution should work for authors, publishers and AI companies. After all, a licensing system would offer a stream of revenue. But in reality there might be just a trickle of income for authors, and the basis for providing it under existing laws might be quite weak. Authors and publishers are depending on copyright law to protect them. Unfortunately, copyright law deals with copying, not with the development of capabilities in probability-driven language outputs. ... To put it another way, once the AI has learned how to write, it has acquired that capability. It is true AI can be manipulated to produce output that reflects copyright-protected content.


Outsmarting Cyber Threats with Attack Graphs

An attack graph is a visual representation of potential attack paths within a system or network. It maps how an attacker could move through different security weaknesses - misconfigurations, vulnerabilities, credential exposures, and so on - to reach critical assets. Attack graphs can incorporate data from various sources, continuously update as environments change, and model real-world attack scenarios. Instead of focusing solely on individual vulnerabilities, attack graphs provide the bigger picture: how different security gaps, like misconfigurations, credential issues, and network exposures, could be combined to pose a serious risk. Unlike traditional security models that prioritize vulnerabilities based on severity scores alone, attack graphs factor in exploitability and business impact. The reason? Just because a vulnerability has a high CVSS score doesn't mean it's an actual threat to a given environment. Attack graphs add critical context, showing whether a vulnerability can actually be used in combination with other weaknesses to reach critical assets. Attack graphs also provide continuous visibility, in contrast to one-time assessments like red teaming or penetration tests, which can quickly become outdated.
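A minimal sketch of the idea, using the networkx graph library with entirely hypothetical nodes and weaknesses: an individual finding only matters here when it sits on a path that chains through to a critical asset, which is exactly the context a raw severity score lacks.

import networkx as nx

# Nodes are footholds an attacker can reach; edge attributes record the
# weakness enabling each hop. All names here are hypothetical - real attack
# graphs are generated from scan data, identity stores, and topology.
g = nx.DiGraph()
g.add_edge("internet", "web-server", weakness="exploitable CVE on exposed service")
g.add_edge("web-server", "app-host", weakness="overly permissive firewall rule")
g.add_edge("app-host", "db-server", weakness="reused service-account credential")
g.add_edge("internet", "vpn-gateway", weakness="unpatched VPN appliance")
g.add_edge("vpn-gateway", "db-server", weakness="flat internal network")

# Enumerate every chain from the internet to the critical asset.
for path in nx.all_simple_paths(g, "internet", "db-server"):
    steps = [g.edges[u, v]["weakness"] for u, v in zip(path, path[1:])]
    print(" -> ".join(path))
    print("   via:", "; ".join(steps))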