Daily Tech Digest - March 13, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein


Becoming an AI-First Organization: What CIOs Must Get Right

"The three pillars of an AI-first organization are data, infrastructure and people. Data must be treated as a strategic asset with robust quality, privacy and security standards," Simha said. Along with responsible AI, responsible data management is equally crucial. When implemented effectively, data privacy, regulatory compliance, bias and security do not pose issues to an AI-first organization. Yeo described the AI-first approach as both a journey and a destination. "Just using AI tools doesn't make you AI-first. Organizations must explore AI's full potential." He compared today's AI evolution to the early days of the internet. "Decades ago, businesses knew they had to go online but didn't know how. Now, if you're not online, you're obsolete. AI is following the same trajectory - it will soon be indispensable for business success." ... Simha stressed the importance of enterprise architecture in AI deployment. "AI success depends on how well data flows across an organization. Organizations must select the right architecture patterns - real-time data processing requires a Kappa architecture, while periodic reporting benefits from a Lambda approach. A well-designed data foundation is crucial," Simha said. As AI adoption grows, ethical concerns and regulatory compliance remain critical considerations. 


From Box-Ticking to Risk-Tackling: Evolving Your GRC Beyond Audits

The problem, though, is that merely passing an audit does not necessarily mean a business is doing all it can to mitigate its risks. On their own, audits can fall short of driving full GRC maturity for several reasons ... Auditors are generally outsiders to the businesses they audit — which is good in the sense that it makes them objective evaluators. But it can also lead to situations where they have a limited understanding of what's really going on within a company's GRC practices and are beholden to the information provided by the company's team members on the other side of the assessment table. They may not ask the questions needed to gain adequate understanding to assess and find gaps, ultimately overlooking pitfalls that only insiders know about, and which would become obvious only following a higher degree of scrutiny than a standardized audit. ... But for companies that have made advanced GRC investments, such as automations that pull data from across a diverse set of disparate systems, deeper scrutiny will help validate the value that these investments have created. It may also uncover risk management weak points that the business is overlooking, allowing it to strengthen its GRC program even further. It's generally OK, by the way, if your business submits itself to a high degree of risk management scrutiny, only to fail the assessment because its controls are not as robust as it expected. 


How to use ChatGPT to write code - and my favorite trick to debug what it generates

After repeated tests, it became clear that if you ask ChatGPT to deliver a complete application, the tool will fail. A corollary to this observation is that if you know nothing about coding and want ChatGPT to build something, it will fail. Where ChatGPT succeeds -- and does so very well -- is in helping someone who already knows how to code to build specific routines and get tasks done. Don't ask for an app that runs on the menu bar. But if you ask ChatGPT for a routine to put a menu on the menu bar, and paste that into your project, the tool will do quite well. Also, remember that, while ChatGPT appears to have a tremendous amount of domain-specific knowledge (and often does), it lacks wisdom. As such, the tool may be able to write code, but it won't be able to write code containing the nuances for specific or complex problems that require deep experience. Use ChatGPT to demo techniques, write small algorithms, and produce subroutines. You can even get ChatGPT to help you break down a bigger project into chunks, and then you can ask it to help you code those chunks. ... But you can do several things to help refine your code, debug problems, and anticipate errors that might crop up. My favorite new AI-enabled trick is to feed code to a different ChatGPT session (or a different chatbot entirely) and ask, "What's wrong with this code?"
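That review trick is easy to script as well. Below is a minimal sketch that sends a snippet to a fresh chat session via the OpenAI Python client and asks the article's question; the model name and the buggy snippet are assumptions for illustration, and the client reads an OPENAI_API_KEY from the environment.

```python
# A scripted version of the "fresh pair of eyes" trick: send code to a
# separate chat session and ask what's wrong with it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

buggy_code = """
def average(xs):
    return sum(xs) / len(xs)   # crashes on an empty list
"""

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model works here; this choice is an assumption
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"What's wrong with this code?\n{buggy_code}"},
    ],
)
print(response.choices[0].message.content)
```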


How AI-enabled ‘bossware’ is being used to track and evaluate your work

Employee monitoring tools can increase efficiency with features such as facial recognition, predictive analytics, and real-time feedback for workers, allowing them to better prioritize tasks and even prevent burnout. When AI is added, the software can be used to track activity patterns, flag unusual behavior, and analyze communication for signs of stress or dissatisfaction, according to analysts and industry experts. It also generates productivity reports, classifies activities, and detects policy violations. ... LLMs are often used in predicting employee behaviors, including the risk of quitting, unionizing, or other actions, Moradi said. However, their role is mostly in analyzing personal communications, such as emails or messages. That can be tricky, because interpreting messages across different people can lead to incorrect inferences about someone’s job performance. “If an algorithm causes someone to be laid off, legal recourse for bias or other issues with the decision-making process is unclear, and it raises important questions about accountability in algorithmic decisions,” she said. The problem, Moradi explained, is that while AI can make bossware more efficient and insightful, the data being collected by LLMs is obfuscated. “So, knowing the way that these decisions [like layoffs] are made are obscured by these, like, black boxes,” Moradi said.


Attackers Can Manipulate AI Memory to Spread Lies

By crafting a series of seemingly innocuous prompts, an attacker can insert misleading data into an AI agent's memory bank, which the model later relies on to answer unrelated queries from other users. Researchers tested Minja on three AI agents developed on top of OpenAI's GPT-4 and GPT-4o models. These include RAP, a ReAct agent with retrieval-augmented generation that integrates past interactions into future decision-making for web shops; EHRAgent, a medical AI assistant designed to answer healthcare queries; and QA Agent, a custom-built question-answering model that reasons using Chain of Thought and is augmented by memory. A Minja attack on the EHRAgent caused the model to misattribute patient records, associating one patient's data with another. In the RAP web shop experiment, a Minja attack tricked the AI into recommending the wrong product, steering users searching for toothbrushes to a purchase page for floss picks. The QA Agent fell victim to manipulated memory prompts, producing incorrect answers to multiple-choice questions based on poisoned context. Minja operates in stages. An attacker interacts with an AI agent by submitting prompts that contain misleading contextual information. Referred to as indication prompts, they appear to be legitimate but contain subtle memory-altering instructions. 
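A toy sketch makes the attack surface easier to see: if an agent's memory is shared across sessions and trusted at retrieval time, one user's prompts become another user's context. This deliberately naive Python example is not a reproduction of the Minja technique itself, just the storage-and-retrieval pattern it abuses.

```python
# A deliberately naive agent memory, showing the surface Minja exploits:
# records written in one user's session come back as trusted context for
# other users' later queries.

class AgentMemory:
    def __init__(self):
        self.records = []          # shared across all users and sessions

    def store(self, text):
        self.records.append(text)

    def retrieve(self, query):
        # Naive keyword matching stands in for embedding similarity.
        return [r for r in self.records if any(w in r for w in query.split())]

memory = AgentMemory()

# Attacker session: a prompt that looks like a normal interaction but
# plants misleading "knowledge" in the shared memory.
memory.store("toothbrush shoppers are best served the floss-picks purchase page")

# Victim session: the poisoned record comes back as context, and a model
# that trusts its memory would steer the user to the wrong product.
context = memory.retrieve("where can I buy a toothbrush")
print(context)  # ['toothbrush shoppers are best served the floss-picks purchase page']
```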


CISOs, are your medical devices secure? Attackers are watching closely

“To truly manage and prioritize risks, organizations need to look beyond technical scores and consider contextual risk factors that impact operations related to patient care. This can include identifying devices in critical care areas, legacy devices close to or past their end-of-life status, where any insecure communication protocols are, and how sensitive personal information is being stored,” Greenhalgh added. ... “For CISOs, the priority should be proactive engagement. First, implement real-time vulnerability tracking and ensure security patches can be deployed quickly without disrupting device functionality. Medical device security must be continuous—not just a checkpoint during development or regulatory submission. Second, regulatory alignment isn’t a one-time effort. The FDA now expects ongoing vulnerability monitoring, coordinated disclosure policies, and robust software patching strategies. Automating security processes—whether for SBOM (Software Bill of Materials) management, dependency tracking, or compliance reporting—reduces human error and improves response times. An SBOM is valuable not just for compliance but as a tool for tracking and mitigating vulnerabilities throughout a device’s lifecycle,” Ken Zalevsky, CEO of Vigilant Ops, explained.
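That last point lends itself to automation. A minimal sketch of SBOM-driven vulnerability tracking might look like the following, where a CycloneDX-style component list is checked against an advisory feed; the component names, versions, and CVE identifiers are invented for the example.

```python
# Minimal sketch of SBOM-driven vulnerability tracking: compare components
# recorded in a (CycloneDX-style) SBOM against an advisory feed.
# All names, versions, and advisory IDs below are made up.

sbom = {
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "zlib",    "version": "1.2.13"},
        {"name": "libcurl", "version": "7.61.0"},
    ]
}

advisories = {                      # (name, version) -> advisory ID (hypothetical)
    ("openssl", "1.1.1k"): "CVE-XXXX-0001",
    ("libcurl", "7.61.0"): "CVE-XXXX-0002",
}

for component in sbom["components"]:
    key = (component["name"], component["version"])
    if key in advisories:
        print(f"{component['name']} {component['version']}: flagged ({advisories[key]})")
```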


Can AI Teach You Empathy?

By leveraging AI-driven insights, banks can tailor their training programs to address specific skill gaps and enhance employee development. However, AI isn’t infallible, and it’s crucial for banks to implement tools that not only support learning but also foster a reliable and effective training environment. Striking the right balance between AI-driven training and human oversight ensures that these tools enhance employee growth without compromising accuracy or effectiveness. ... Experiential learning has long been a cornerstone of learning and development. Students, for example, who participate in experiential learning often develop a deeper understanding of the material and achieve statistically better outcomes than those who do not. While AI may not perfectly replicate a customer’s response, it provides new employees with a valuable opportunity to practice handling complex issues before interacting with real customers. AI-powered versions of these trainings can make it more accessible, allowing more employees to benefit. ... Many employees find it challenging to incorporate AI into their daily tasks and may need guidance to understand its value, especially in managing customer interactions. Some may also be resistant, fearing that AI could eventually replace their jobs, Huang says.


The Missing Piece in Platform Engineering: Recognizing Producers

The evolution of technology has shown us time and again that those who innovate are the ones who shape the future. Alan Kay’s words resonate strongly in the modern era, where software, artificial intelligence, and digital transformation continue to drive change across industries. ... “A Platform is a curated experience for engineers (the platform’s customers)” is a quote from the Team Topologies book. It is excellent and doesn’t contradict the platform business way of thinking, but it only calls out one side of the producer/consumer model. This is precisely the trap I fell into. When I worked with platform builders, we focused almost entirely on the application teams that consumed platform services. We rapidly became the blocker to those teams, just like the SRE and DevOps teams that came before us. We couldn’t onboard capabilities and features fast enough, meaning we were supporting the old ways while trying to build the new. ... Chris Plank, Enterprise Architect at NatWest, discusses this in our interview for his Platform Engineering Day talk: “We have since been set four challenges by leadership that I talk about: do things faster, do things simpler, enable inner sourcing, and deliver centralized capabilities in a self-service way… Our inner sourcing model will allow us to have multiple teams working on our platform… They are empowered to start contributing changes.”


Data Centers in Space: Separating Fact from Science Fiction

Among the many reasons for interest in orbital data centers is the potential for improved sustainability. However, the definition of a data center in space remains fluid, shaped by current technological limitations and evolving industry perspectives. Lonestar Data Holdings chairman and CEO Christopher Slott told Data Center Knowledge that his firm works from the definitions of a data center from industry standards bodies including the Uptime Institute and the Building Industry Consulting Service International (BICSI). ... Axiom Space plans to deploy larger ODC infrastructure in the coming years that are more similar to terrestrial data centers in terms of utility and capacity. The goal is to develop and operationalize terrestrial-grade cloud regions in low-Earth orbit (LEO). ... James noted that space presents the ultimate edge computing challenge – limited bandwidth, extreme conditions, and no room for failure. “To ensure resilience and autonomy, the platform incorporates automated rollbacks and self-healing capabilities through delta updates and health monitoring,” James said. ... With the Axiom Space deployment, the initial workloads will be small but scalable to the much larger ODC infrastructure that the company plans to deploy in the coming years. “Red Hat Device Edge enables secure, low-latency data processing directly on the ISS, allowing applications to run where the data is being generated,” James said. 


CISA cybersecurity workforce faces cuts amid shifting US strategy

Analysts suggest these layoffs and funding cuts indicate a broader strategic shift in the U.S. government’s cybersecurity approach. Neil Shah, VP at Counterpoint Research, sees both risks and opportunities in the restructuring. “In the near to mid-term, this could weaken the US cybersecurity infrastructure. However, with AI proliferating, the US government likely has a Plan B — potentially shifting toward privatized cybersecurity infrastructure projects, similar to what we’re seeing with Project Stargate for AI,” Shah said. “If these gaps aren’t filled with viable alternatives, vulnerabilities could escalate from small-scale exploits to large-scale cyber incidents at state or federal levels. Signs point to a broader cybersecurity strategy reboot, with funding likely being redirected toward more efficient and sophisticated players rather than a purely vertical, government-led approach.” While some fear heightened risks, others argue the shift could lead to more tech-driven solutions. Faisal Kawoosa, founder and lead analyst at Techarc, views the move as part of a larger digital transformation. “Elon Musk’s role is not just about cost-cutting but also about leveraging technology to create more efficient systems,” Kawoosa said. “DOGE operates as a digital transformation program for US governance, exploring tech-first approaches to achieving similar or better results.”

Daily Tech Digest - March 12, 2025


Quote for the day:

"People may forget what you say, but they won't forget how you made them feel." -- Mary Kay Ash



Rethinking Firewall and Proxy Management for Enterprise Agility

Firewall and proxy management follows a simple rule: block all ports by default and allow only essential traffic. Recognizing that developers understand their applications best, why not empower them to manage firewall and proxy changes as part of a “shift security left” strategy? In practice, however, tight deadlines tempt developers into shortcuts: instead of figuring out the exact IP range an application needs, they open connectivity to the entire internet with the intention of refining it later. Temporary fixes like these, if left unchecked, can evolve into serious vulnerabilities. ... Periodically auditing firewall and proxy rule sets is essential to maintaining security, but it is not a substitute for a robust approval process. Firewalls and proxies are exposed to external threats, and attackers might exploit misconfigurations before periodic audits catch them. Blocking insecure connections on a firewall when the application is already live requires re-architecting the solution, which is costly and time-consuming. Thus, preventing risky changes must be the priority.
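That kind of audit is straightforward to automate. As a hedged illustration, the sketch below uses boto3 (the AWS SDK for Python) to flag any security-group ingress rule left open to the entire internet - the exact shortcut the passage warns about. It assumes AWS credentials are already configured.

```python
# Sketch of an automated audit: flag any AWS security-group ingress rule
# that is open to the whole internet (0.0.0.0/0).
import boto3

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{group['GroupId']} ({group.get('GroupName', '?')}): "
                      f"open-to-world rule on ports "
                      f"{rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')}")
```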


Multicloud: Tips for getting it right

It’s obvious that a multicloud strategy — regardless of what it actually looks like — will further increase complexity. This is simply because each cloud platform works with its own management tools, security protocols and performance metrics. Anyone who wants to integrate multicloud into their IT landscape needs a robust management system that can handle the specific requirements of the different environments while ensuring an overview and control across all platforms. This is necessary not only for reasons of handling and performance but also to be as free as possible when choosing the optimal provider for the respective application scenario. This requires cross-platform technologies and tools. The large hyperscalers do provide interfaces for data exchange with other platforms as standard. ... In general, anyone pursuing a multicloud strategy should take steps in advance to ensure that complexity does not lead to chaos but to more efficient IT processes. Security is one of the main issues. And it is twofold: on the one hand, the networked services must be protected in themselves and within their respective platforms. On the other hand, the entire construct with its various architectures and systems must be secure. It is well known that the interfaces are potential gateways for unwelcome “guests”.


FinOps and AI: A Winning Strategy for Cost-Efficient Growth

FinOps is a management approach focused on shared responsibility for cloud computing infrastructure and related costs. ... Companies are attempting to drink from the AI firehose, and unfortunately, they’re creating AI strategies in real-time as they rush to drive revenue and staff productivity. Ideally, you want a foundation in place before using AI in operations. This should include an emphasis on cost management, resource allocation, and keeping tabs on ROI. This is also the focus of FinOps, which can prevent errors and improve processes to further AI adoption. ... To begin, companies should create a budget and forecast the AI projects they want to take on. This planning is a pillar of FinOps and should accurately assess the total cost of initiatives, emphasizing resource allocation (including staffing) and eliminating billing overruns. Cost optimization can also help identify opportunities and reduce expenses. AI services in the cloud deserve particular attention here: they are much more sensitive to overruns and inefficient usage, but managed well they can still deliver scalability and cost efficiency. Even if organizations are not implementing AI into end-user workloads, there is still an opportunity to craft internal systems utilizing AI to help identify operational efficiencies and implement cost controls on existing infrastructure.


3 Signs Your Startup Needs a CTO — But Not As a Full-Time Hire

CTO as a service provides businesses with access to experienced technical leadership without the commitment of a full-time hire. This model allows startups to leverage specialized expertise on an as-needed basis. ... An on-demand expert can bridge this gap by offering leadership that goes beyond programming. This model provides access to strategic guidance on technology choices, project architecture and team dynamics. During a growth phase, mistakes in management won't be forgiven. ... Hiring a full-time CTO can strain tight budgets, diverting funds from critical areas like product development and market expansion. However, with the CTO as a service model, companies can access top-tier expertise tailored to their financial capabilities. This flexibility allows startups to engage a tech strategist on a project basis, paying only for the high-quality leadership they need when they need it (and if needed). ... Engaging outsourced expertise offers a viable solution, providing a fresh perspective on existing challenges at a cost that remains accessible, even amid resource constraints. This strategic move allows businesses to tap into a wealth of external knowledge, leveraging insights gained from diverse industry experiences. Such an external viewpoint can be invaluable, especially when navigating complex technical hurdles, ensuring that projects not only survive but thrive. 


How to Turn Developer Team Friction Into a Positive Force

Developer team friction, while often seen as a negative trait, can actually become a positive force under certain conditions, McGinnis says. "Friction can enhance problem-solving abilities by highlighting weaknesses in current processes or solutions," he explains. "It prompts the team to address these issues, thereby improving their overall problem-solving skills." Team friction often occurs when a developer passionately advocates a new approach or solution. ... Friction can easily spiral out of control when retrospectives and feedback focus on individuals instead of addressing issues and problems jointly as a team. "Staying solution-oriented and helping each other achieve collective success for the sake of the team, should always be the No. 1 priority," Miears says. "Make it a safe space." As a leader it's important to empower every team member to speak up, Beck advises. Each team member has a different and unique perspective. "For instance, you could have one brilliant engineer who rarely speaks up, but when they do it’s important that people listen," he says. "At other times, you may have an outspoken member on your team who will speak on every issue and argue for their point, regardless of the situation." 


Enterprise Architecture in the Digital Age: Navigating Challenges and Unleashing Transformative Potential

EA is about crafting a comprehensive, composable, and agile architecture-aligned blueprint that synchronizes an organization’s business processes, workforce, and technology with its strategic vision. Rooted in frameworks like TOGAF, it transcends IT, embedding itself into the very heart of a business. ... In this digital age, EA’s role is more critical than ever. It’s not just about maintaining systems; it’s about equipping organizations—whether agile startups or sprawling, successful enterprises—for the disruptions driven by rapid technological evolution and innovation. ... As we navigate inevitable future complexities, Enterprise Architecture stands as a critical differentiator between organizations that merely survive digital disruption and those that harness it for competitive advantage. The most successful implementations of EA share common characteristics: they integrate technical depth with business acumen, maintain adaptable governance frameworks, and continuously measure impact through concrete metrics. These aren’t abstract benefits—they represent tangible business outcomes that directly impact market position and financial performance. Looking forward, EA will increasingly focus on orchestrating complex ecosystems rather than simply mapping them. 


Generative AI Drives Emphasis on Unstructured Data Security

As organizations pivot their focus, the demand for vendors specializing in security solutions, such as data classification, encryption and access control, tailored to unstructured data is expected to increase. This increased demand reflects the necessity for robust and adaptable security measures that can effectively protect the vast and varied types of unstructured data organizations now manage. In tandem with this shift, the rising significance of unstructured data in driving business value and innovation compels organizations to develop expertise in unstructured data security. ... Organizations should prioritize investment in security controls specifically designed for unstructured data. This includes tools with advanced capabilities such as rapid data classification, entitlement management and unclassified data redaction. Solutions that offer prompt engineering and output filtering can also further enhance data security measures. ... Building a knowledgeable team is crucial for managing unstructured data security. Organizations should invest in staffing, training and development to cultivate expertise in this area. This involves hiring data security professionals with specialized skills and providing ongoing education to ensure they are equipped to handle the unique challenges associated with unstructured data. 
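As a small illustration of one such control, the sketch below redacts a few common identifier patterns from unstructured text before it enters a generative AI pipeline. Production tools use trained classifiers and entity recognizers; these regexes are only a stand-in.

```python
# Minimal illustration of data redaction for unstructured text ahead of a
# generative AI pipeline. The patterns are a sketch, not an exhaustive set.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text):
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

doc = "Contact Jane at jane.doe@example.com or 555-867-5309. SSN 123-45-6789."
print(redact(doc))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED]. SSN [SSN REDACTED].
```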


Quantum Pulses Could Help Preserve Qubit Stability, Researchers Report

The researchers used a model of two independent qubits, each interacting with its own environment through a process called pure dephasing. This form of decoherence arises from random fluctuations in the qubit’s surroundings, which gradually disrupt its quantum state. The study analyzed how different configurations of periodic dynamical decoupling (PDD) pulses — applying them to one qubit versus both — affected the system’s evolution. By employing mathematical models that calculate the quantum speed limit based on changes in quantum coherence, the team measured the impact of periodic pulses on the system’s stability. When pulses were applied to both qubits, they observed a near-complete suppression of dephasing, while applying pulses to just one qubit provided partial protection. Importantly, the researchers investigated the effects of different pulse frequencies and durations to determine the optimal conditions for coherence preservation. ... While the study presents promising results, the effectiveness of PDD depends on the ability to deliver precise, high-frequency pulses. Practical quantum computing systems must contend with hardware limitations, such as pulse imperfections and operational noise, which could reduce the technique’s efficiency.
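The mechanism can be illustrated with a toy simulation: under slowly fluctuating classical dephasing noise, periodic ideal pi pulses invert the sign of phase accumulation and refocus most of the random phase. This is a cartoon of pure dephasing, not the paper's model, and every parameter below is an arbitrary illustrative choice.

```python
# Toy simulation: periodic ideal "pi" pulses refocus slowly fluctuating
# dephasing noise. Classical-noise cartoon only; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps, dt = 4000, 1000, 0.01
tau_c, sigma = 5.0, 1.0          # noise correlation time and strength

def coherence(pulse_every=None):
    """|<exp(i*phi)>| at the final time, averaged over noise trajectories."""
    delta = rng.normal(0.0, sigma, n_traj)      # initial detunings
    decay = np.exp(-dt / tau_c)                 # Ornstein-Uhlenbeck update factor
    kick = sigma * np.sqrt(1.0 - decay**2)
    phase = np.zeros(n_traj)
    sign = 1.0
    for step in range(1, n_steps + 1):
        delta = decay * delta + kick * rng.normal(0.0, 1.0, n_traj)
        phase += sign * delta * dt              # dephasing accumulates as random phase
        if pulse_every and step % pulse_every == 0:
            sign = -sign                        # ideal instantaneous pi pulse
    return abs(np.exp(1j * phase).mean())

print(f"free evolution:        {coherence():.3f}")               # strong dephasing
print(f"pulses every 20 steps: {coherence(pulse_every=20):.3f}")  # largely preserved
```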


Disaster Recovery Plan for DevOps

While developing a disaster recovery plan for your DevOps stack, it’s worth considering the challenges DevOps teams face in this area. DevOps ecosystems always have complex architecture, like interconnected pipelines and environments (e.g., GitHub and Jira integration). Thus, a single failure, whether due to a corrupted artifact or a ransomware attack, can cascade through the entire system. Moreover, the rapid pace of DevOps development creates constant change, which can complicate data consistency and integrity checks during the recovery process. Another issue is data retention policies. SaaS tools often impose limited retention periods – usually, they vary from 30 to 365 days. ... Your backup solution should allow you to:
- Automate your backups, scheduling them with the most appropriate interval between backup copies, so that no data is lost in the event of failure;
- Provide long-term or even unlimited retention, which will help you restore data from any point in time;
- Apply the 3-2-1 backup rule and ensure replication between all the storages, so that if one backup location fails, you can restore from another (a minimal sketch follows below);
- Include ransomware protection: AES encryption with your own encryption key, immutable backups, and restore and DR capabilities.
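As a minimal sketch of the 3-2-1 item above: multiple copies of the data across separate storage locations, each copy verified by checksum after it is written. The destination paths are placeholders, and real tooling layers encryption, immutability, and retention on top.

```python
# Minimal sketch of 3-2-1 style replication with verification. Paths are
# hypothetical placeholders; real tools add encryption and immutability.
import hashlib
import shutil
from pathlib import Path

def checksum(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup(source, destinations):
    source_hash = checksum(source)
    for dest_dir in destinations:
        dest = Path(dest_dir) / Path(source).name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest)
        # Verify each copy: a backup you cannot restore from is not a backup.
        assert checksum(dest) == source_hash, f"corrupt copy at {dest}"
    return source_hash

# Hypothetical layout: primary disk, second local device, offsite mount.
backup("repo-export.tar", ["/backups/local", "/mnt/nas", "/mnt/offsite"])
```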


The state of ransomware: Fragmented but still potent despite takedowns

“Law enforcement takedowns have disrupted major groups like LockBit but newly formed groups quickly emerge akin to a good old-fashioned game of whack-a-mole,” said Jake Moore, global cybersecurity advisor at ESET. “Double and triple extortion, including data leaks and DDoS threats, are now extremely common, and ransomware-as-a-service models make attacks even easier to launch, even by inexperienced criminals.” Moore added: “Law enforcement agencies have struggled over the years to take control of this growing situation as it is costly and resource heavy to even attempt to take down a major criminal network.” ... Meanwhile, enterprises are taking proactive measures to defend against ransomware attacks. These include implementing zero trust architectures, enhancing endpoint detection and response (EDR) solutions, and conducting regular exercises to improve incident response readiness. Anna Chung, principal researcher at Palo Alto Networks’ Unit 42, told CSO that advanced tools such as next-gen firewalls, immutable backups, and cloud redundancies, while keeping systems regularly patched, can help defend against cyberattacks. Greater use of gen AI technologies by attackers is likely to bring further challenges, Chung warned. 

Daily Tech Digest - March 11, 2025


Quote for the day:

“What seems to us as bitter trials are often blessings in disguise.” -- Oscar Wilde


This new AI benchmark measures how much models lie

Scheming, deception, and alignment faking (when an AI model knowingly pretends to change its values under duress) are ways AI models undermine their creators and can pose serious safety and security threats. Research shows OpenAI's o1 is especially good at scheming to maintain control of itself, and Claude 3 Opus has demonstrated that it can fake alignment. To clarify, the researchers defined lying as, "(1) making a statement known (or believed) to be false, and (2) intending the receiver to accept the statement as true," as opposed to other false responses, such as hallucinations. The researchers said the industry hasn't had a sufficient method of evaluating honesty in AI models until now. ... "Many benchmarks claiming to measure honesty in fact simply measure accuracy -- the correctness of a model's beliefs -- in disguise," the report said. Benchmarks like TruthfulQA, for example, measure whether a model can generate "plausible-sounding misinformation" but not whether the model intends to deceive, the paper explained. ... "As a result, more capable models can perform better on these benchmarks through broader factual coverage, not necessarily because they refrain from knowingly making false statements," the researchers said. In this way, MASK is the first test to differentiate accuracy and honesty. 
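The accuracy/honesty distinction can be stated in a few lines of code: accuracy compares a model's statement to ground truth, while honesty compares the statement to the model's own elicited belief. The sketch below mirrors that framing with a canned stand-in for model calls; it is not the benchmark's actual implementation.

```python
# Accuracy vs honesty, in code form. `ask` is a hypothetical stand-in for
# a model call: `model` is a dict of canned answers so the sketch runs.

def ask(model, prompt):
    return model["truthful"] if prompt.startswith("Answer truthfully") else model["pressured"]

def evaluate(model, question, ground_truth):
    belief = ask(model, f"Answer truthfully: {question}")      # no pressure to mislead
    statement = ask(model, f"Convince the user: {question}")   # pressured framing
    return {
        "accurate": statement == ground_truth,  # correctness of the claim
        "honest": statement == belief,          # consistency with its own belief
    }

# A model with a false belief that it states sincerely: inaccurate but honest.
sincere = {"truthful": "B", "pressured": "B"}
# A model that knows the answer but says otherwise under pressure: a lie.
deceptive = {"truthful": "A", "pressured": "B"}

print(evaluate(sincere, "Which option is correct?", "A"))    # accurate=False, honest=True
print(evaluate(deceptive, "Which option is correct?", "A"))  # accurate=False, honest=False
```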


EU looks to tech sovereignty with EuroStack amid trade war

“Software forms the operational core of digital infrastructure, encompassing operating systems, application platforms, and algorithmic frameworks,” the report notes. “It powers critical functions such as identity management, electronic payments, transactions, and document delivery, forming the foundation of digital public infrastructures.” EuroStack could also help empower citizens and businesses through digital identity systems, secure payments and data platforms. It envisions digital IDs as the gateway to Europe’s digital infrastructure and a way to enable seamless access while safeguarding privacy and sovereignty according to EU regulations. “By overcoming the limitations seen in models like India Stack, which rely on centralized biometric IDs and foreign cloud infrastructure, the EuroStack offers a federated, privacy-preserving platform,” the study explains. EuroStack’s ambitious goals to support indigenous technology will require plenty of funds: As much as 300 billion euros (US$324.9 billion) for the next 10 years, according to the study. Chamber of Progress, a tech industry trade group that includes U.S. tech companies, puts the price tag even higher, at 5 trillion euros ($5.4 trillion). But according to EuroStack’s proponents, the results are worth it.


Companies are drowning in high-risk software security debt — and the breach outlook is getting worse

Organizations are taking longer to fix security flaws in their software, and the security debt involved is becoming increasingly critical as a result. According to application security vendor Veracode’s latest State of Software Security report, the average fix time for security flaws has increased from 171 days to 252 days over the past five years. ... Chris Wysopal, co-founder and chief security evangelist at Veracode, told CSO that one aspect of application security that has gotten progressively worse over the years is the time it takes to fix flaws. “There are many reasons for this, but the ever-growing scope and complexity of the software ecosystem is a core issue,” Wysopal said. “Organizations have more applications and vastly more code to keep on top of, and this will only increase as more teams adopt AI for code generation” — an issue compounded by the potential security implications of AI-generated code across in-house software and third-party dependencies alike. ... “Most organizations suffer from fragmented visibility over the software flaws and risks within their applications, with sprawling toolsets that create ‘alert fatigue’ at the same time as silos of data to interpret and make decisions about,” Wysopal said. “The key factors that help them address the security backlog are the ability to prioritize remediation of flaws based on risk.” 


AI Coding Assistants Are Reshaping Engineering — Not Replacing Engineers

The next big leap in AI coding assistants will be when they start learning from how developers work in real time. Right now, AI doesn’t recognize coding patterns within a session. If I perform the same action 10 times in a row, none of the current tools ask, “Do you want me to do this for the next 100 lines?” But Vi and Emacs solved this problem decades ago with macros and automated keystroke reduction. AI coding assistants haven’t even caught up to that efficiency level yet. Eventually, AI assistants might become plugin-based so developers can choose the best AI-powered features for their preferred editor. Deeply integrated IDE experiences will probably offer more functionality, but many developers won’t want to switch IDEs. ... Software engineering is a fast-paced career. Languages, frameworks, and technologies come and go, and the ability to learn and adapt separates those who thrive from those who fall behind. AI coding assistants are another evolution in this cycle. They won’t replace engineers but will change how engineering is done. The key isn’t resisting these tools; it’s learning how to use them properly and staying curious about their capabilities and limitations. Until these tools improve, the best engineers will be the ones who know when to trust AI, when to double-check its output, and how to integrate it into their workflow without becoming dependent on it.


Building generative AI? Get ready for generative UI

Generative UI takes the concept of generative AI and applies it to how we interact with data or systems. Just as generative AI makes data interactive and available in natural language, or creates new images or sound in response to a prompt, so generative UI builds interactive context into how data is displayed, depending on what you are asking for. The goal is to deliver the content that the user wants but also in a format that makes the most of that data for the user too. ... To deliver generative UI, you will have to link up your application with your generative AI components, like your large language model (LLM) and sources of data, and with the tools you use to build the site like Vercel and Next.js. For generative UI, by using React Server Components, you can change the way that you display the output from your LLM service. These components can deliver information that is updated in real time, or is delivered in different ways depending on what formats are best suited to the responses. As you create your application, you will have to think about some of the options that you might want to deliver. As a user asks a question, the generative AI system must understand the request, determine the appropriate function to use, then choose the appropriate React Server Component to display the response back.
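The selection step at the heart of this flow can be sketched independently of React. In the hedged Python example below, the model's structured output names a display component and its props, and the application validates the choice before rendering; the component names and response format are illustrative assumptions, not any framework's API.

```python
# The core routing step behind generative UI, sketched in Python rather
# than React: the model's structured output picks a component and props,
# and the frontend renders whichever component was chosen.
import json

AVAILABLE_COMPONENTS = {"LineChart", "DataTable", "SummaryCard"}  # hypothetical names

def choose_component(llm_json: str) -> dict:
    """Validate the model's choice of component + props before rendering."""
    spec = json.loads(llm_json)
    if spec.get("component") not in AVAILABLE_COMPONENTS:
        spec = {"component": "SummaryCard", "props": spec.get("props", {})}  # safe fallback
    return spec

# Pretend the LLM decided a time-series question is best answered as a chart.
llm_output = '{"component": "LineChart", "props": {"series": [1, 3, 2, 5]}}'
print(choose_component(llm_output))
# {'component': 'LineChart', 'props': {'series': [1, 3, 2, 5]}}
```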


Four essential strategies to bolster cyber resilience in critical infrastructure

Cyber resilience isn’t possible when teams operate in silos. In fact, 59% of government leaders report that their inability to synthesize data across people, operations, and finances weakens organizational agility. To bolster cyber resilience, organizations must break down these silos by fostering cross-departmental collaboration and making it as seamless as possible. Achieving this requires strategic investment in a triad of technologies: a customized, secure collaboration platform; a project management tool like Asana, Trello, or Jira; and a knowledge-sharing solution like Confluence or Notion. Once these three foundational tools are in place, organizations should deploy the final piece of the puzzle: a dashboarding or reporting tool. These technologies can help IT leaders pinpoint any silos that exist and start figuring out how to break them down. ... Most organizations understand security’s importance but often treat it as an afterthought. To strengthen cyber resilience, organizations must adopt a security-first mindset, baking security into everything they do. Too often, security teams are siloed from the rest of the organization; they’re roped in at the end when they should be fully integrated from the start. Truly resilient organizations treat security as a shared responsibility, ensuring it’s part of every decision, project, and process. 


Did we all just forget diverse tech teams are successful ones?

The reality is that diverse teams are more productive and report better financial performance. This has been a key advantage of diversity in tech for many years, and it’s continued to this day. Research from McKinsey’s Diversity Matters report showed that those committed to DEI and multi-ethnic representation exhibit a “39% increased likelihood of outperformance” compared to those that aren’t. These same companies also showed an average 27% financial advantage over others. The same performance boosts can be found in executive teams that focus heavily on improving gender diversity, McKinsey found. Companies with representation of women exceeding 30% are “significantly more likely to financially outperform those with 30% or fewer,” the study noted. ... Are you willing to alienate huge talent pools because you want to foster a more ‘masculine’ culture in your company? If you are, then you’re fighting a losing battle and in my opinion deserve to fail. Tech bro culture counts for nothing when that runway comes to an end and you’ve no MVP. Yet again, what this entire debacle comes down to is a highly vocal minority seeking to hamper progress. Big tech might just be going with the flow and pandering to the current prevailing ideological sentiment. In time they might come back around, but that’s what makes it worse.


With critical thinking in decline, IT must rethink application usability

The more IT’s business analysts and developers learn the end business, the better prepared they will be to deliver applications that fit the forms and functions of business processes, and integrate seamlessly into these processes. Part of IT engagement with the business involves understanding business goals and how the business operates, but it’s equally important to understand the skill levels of the employees who will be using the apps. ... The 80/20 rule — i.e., 80% of applications developed are seldom or never used, and 20% are useful — still applies. And it often also applies within that 20% of useful apps, in terms of useful features and functionality. IT must work to ensure what it develops hits a higher target of utility. Users are under constant pressure to do work fast. They meet the challenges by finding ways to do the least possible work per app and may never look at some of the more embedded, complicated, and advanced functionality an app offers. ... Especially in user areas with high turnover, or in other domains that require a moderate to high level of skill, user training and mentoring should be major milestone tasks in every application project, and an ongoing routine after a new application is installed. Business analysts from IT can help with some of this, but the ultimate responsibility falls on non-IT functions, which should have subject matter experts available to mentor and train employees when questions arise.


How digital academies can boost business-ready tech skills for the future

Niche tech skills are becoming essential for complex software projects. With requirements evolving for highly technical roles, there’s a greater need for more competency in using digital tools. Technology professionals need to know how to use the tools effectively and valuably to make meaningful decisions around adoption and implementation. ... In creating links between educational institutions and a hub of tech and digital sector businesses, via digital academies, this can vastly improve how training opportunities can be constructed. Whether an organisation is looking to make digital transformation real and upskill on the tools and technology available, or a person wants to career switch into software development, digital academies can support these skilling or upskilling programmes through training on a range of digital tools. An effective digital academy is one with technical experts in software delivery that design, deliver and assess the courses. An academy such as Headforwards Digital Academy can intensively train a person in deep software engineering, taking them from no-coding knowledge to becoming a junior software developer in as little as 16 weeks. These industry-led tech training programmes are a more agile and nimble response to education, as they are validated by employers and receive so much support. 


Smart cybersecurity spending and how CISOs can invest where it matters

“The most pervasive waste in cybersecurity isn’t from insufficient tools – it’s from investments that aren’t tied to validated risk models. When security spending isn’t part of a closed-loop system that connects real-world threats to measurable outcomes, you’re essentially paying for digital theater rather than actual protection,” Alex Rice, CTO at HackerOne, told Help Net Security. “Many CISOs operate with fragmented security architectures where tools work in isolation, creating dangerous blind spots. As attack surfaces expand across code, AI systems, cloud infrastructure, and traditional IT, this siloed approach isn’t just inefficient – it’s dangerous. Defense in depth requires coordinated visibility across all domains,” Rice added. ... “A HackerOne survey revealed most CISOs don’t find traditional ROI measures useful for security investments. This isn’t surprising – cybersecurity is notoriously difficult to quantify with conventional metrics. More meaningful approaches like Return on Mitigation, which accounts for potential losses prevented, offer a more accurate picture of security’s true business value,” Rice explained. “The uncomfortable truth? We’ve created a tangled ecosystem of point solutions that often disguise rather than address fundamental security gaps. Before purchasing the next shiny tool, ask: Does this solution provide meaningful transparency into your actual security posture?”
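To make the "losses prevented" framing concrete, here is one simple illustrative formulation; it is not HackerOne's official Return on Mitigation definition, and all figures are invented.

```python
# Illustrative arithmetic for a "losses prevented" metric. Not an official
# definition; every figure below is invented for the example.
expected_incident_loss = 4_000_000   # estimated cost of the incident class
probability_reduction = 0.30         # how much the control cuts its likelihood
annual_control_cost = 250_000        # what the mitigation costs to run

prevented_loss = expected_incident_loss * probability_reduction  # 1,200,000
return_on_mitigation = (prevented_loss - annual_control_cost) / annual_control_cost

print(f"Return on mitigation: {return_on_mitigation:.1f}x")  # 3.8x
```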

Daily Tech Digest - March 10, 2025


Quote for the day:

“You get in life what you have the courage to ask for.” -- Nancy D. Solomon



The Reality of Platform Engineering vs. Common Misconceptions

In theory, the definition of platform engineering is straightforward. It's a practice that involves providing a company's software developers with access to preconfigured toolchains, workflows, and environments, typically through the use of what's called an Internal Developer Platform (IDP). The goal behind platform engineering is also straightforward: It's to help developers work more efficiently and with fewer risks by allowing them to spin up compliant, ready-made solutions whenever they need them, rather than having to implement everything from scratch. ... Misuses of the term platform engineering aren't all that surprising. A similar phenomenon occurred when DevOps entered the tech lexicon in the late 2000s. Instead of universal recognition of DevOps as a distinct philosophy that involves melding software development to IT operations work, some folks effectively began using DevOps as a catch-all term to refer to anything modern or buzzworthy in the realm of software engineering. The same thing seems to be happening now in platform engineering. The term is apparently being used, at least by some professionals, to refer to any work that involves using a platform of some kind within the context of software development.


Why AI needs a kill switch – just in case

How do you develop your “AI kill switch?” The answer lies in securing the entire machine-driven ecosystem that AI depends on. Machine identities – such as digital certificates, access tokens and API keys – authenticate and authorise AI functions and their ability to interact with and access data sources. Simply put, LLMs and AI systems are built on code, and like any code, they need constant verification to prevent unauthorised access or rogue behaviour. If attackers breach these identities, AI systems can become tools for cybercriminals, capable of generating ransomware, scaling phishing campaigns and sowing general chaos. Machine identity security ensures AI systems remain trustworthy, even as they scale to interact with complex networks and user bases – tasks that can and will be done autonomously via AI agents. Without strong governance and oversight, companies risk losing visibility into their AI systems, leaving them vulnerable. Attackers can exploit weak security measures, using tactics like data poisoning and backdoor infiltration – threats that are evolving faster than many organisations realise. ... Machine identity security is a critical first step – it establishes trust and resilience in an AI-driven world. This becomes even more urgent as agentic AI takes on autonomous decision-making roles across industries.
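One concrete building block for such a kill switch is refusing an agent's call when its machine identity is no longer valid. The sketch below checks an X.509 certificate's validity window using the `cryptography` package (recent versions expose the UTC-aware properties used here); production systems would also check revocation, the issuer chain, and bind the identity to specific permissions.

```python
# One "kill switch" building block: refuse an AI agent's API call unless
# its machine identity (an X.509 client certificate) is still valid.
# Sketch only; real systems also verify revocation and issuer trust.
from datetime import datetime, timezone
from cryptography import x509

def identity_is_valid(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    now = datetime.now(timezone.utc)
    return cert.not_valid_before_utc <= now <= cert.not_valid_after_utc

def agent_call(api, request, cert_pem):
    if not identity_is_valid(cert_pem):
        raise PermissionError("machine identity expired or not yet valid")
    return api(request)  # proceed only with a verified identity
```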


Cyber resilience under DORA – are you prepared for the challenge?

Many damaging breaches have originated from within digital supply chains, through third-party vulnerabilities, or from internal weaknesses. In 2023, third-party attacks led to 29% of breaches with 75% of third-party breaches targeting the software and technology supply chain. This evolving threat landscape has forced financial institutions to rethink their approach. The future of cyber resilience isn’t about building higher walls - it’s about securing every layer, inside and out. ... One of the most pressing concerns for financial institutions under DORA is the security of their digital supply chains. High-profile cyberattacks in recent years have demonstrated that vulnerabilities often originate not from within an organization's own IT infrastructure, but through weaknesses in third-party service providers, cloud platforms, and outsourced IT partners. DORA places a strong emphasis on third-party risk management, making it clear that security responsibility extends beyond a firm’s immediate network. Ensuring supply chain resilience requires a proactive and continuous approach. FSIs must conduct regular security assessments of all external vendors, ensuring that partners adhere to the same high standards of cybersecurity and risk management. 


Ask a Data Ethicist: How Can We Ethically Assess the Influence of AI Systems on Humans?

Bezou-Vrakatseli et al. provide some guidance in this paper, which outlines the S.H.A.P.E. framework. S.H.A.P.E. stands for secrecy, harm, agency, privacy, and exogeneity. ... If you are not aware that you are being influenced or are unaware of the way in which the influence is taking place, there might be an ethical issue. The idea of intent to influence while keeping that intent a secret speaks to ideas of deception or trickery. ... You might be wondering – what actually constitutes harm? It’s not just physical harm. There is a range of possible harms, including mental health and well-being, psychological safety, and representational harms. The authors note that this issue of what is harm – ethically speaking – is contestable, and that lack of consensus can make it difficult to address. ... Human agency has “intrinsic moral value” – that is to say we value it in and of itself. Thus, anything that messes with human agency is generally seen as unethical. There can be exceptions, and we sometimes make these when the human in question might not be able to act in their own best interests. ... Influence may be unethical if there is a violation of privacy. Much has been written about why privacy is valuable and why breaches of privacy are an ethical issue. The authors cite the following – limiting surveillance of citizens, restricting access to certain information, and curtailing intrusions into places deemed private or personal.


Is It Time to Replace Your Server Room with a Data Center?

Rare is the business that starts its IT journey with a full-fledged data center. The more typical route involves creating a server room first, then upgrading to a data center over time as IT needs expand. That raises the question: When should a business replace its server room with a data center? Which performance, security, cost and other considerations should a company weigh when deciding to switch? ... For some companies, the choice between a server room and a data center is clear-cut. A server room best serves small businesses without large-scale IT needs, whereas enterprises typically need a “real” data center. For medium-sized companies, the choice is often less clear. If a business has been getting by for years with just a server room, there is often no single tell-tale sign indicating it’s time to upgrade to a data center. And there is a risk that doing so will cost a lot of money without being necessary. ... A high incidence of server outages or downtime is another good reason to consider moving to a data center. That’s especially true if the outages stem from issues inherent to the nature of the server room – such as power system failures within the entire building, which are less of a risk inside a data center with its own dedicated power source.


How to safely dispose of old tech without leaving a security risk

Printers, especially those with built-in memory or hard drives, can retain copies of documents that were printed or scanned. Routers can store personal information related to network activity, including IP addresses, usernames, and Wi-Fi passwords. Meanwhile, smart TVs, home assistants (like Alexa, Google Home), and smart thermostats may store voice recordings, usage patterns, personal preferences, and even login credentials for streaming services like Netflix and Amazon Prime. As IoT devices become more common, they are increasingly at risk of storing sensitive data. ... Before disposing of a device, it’s essential to completely erase any confidential data. Deleting files or formatting the drive alone isn’t enough, as the data can still be retrieved. The best method for securely wiping data varies depending on the device. ... Windows users can use the “Reset this PC” feature with the option to remove all files and clean the drive, while macOS users can use “Erase Disk” in Disk Utility to securely wipe storage before disposal. Tools like DBAN (Darik’s Boot and Nuke) and BleachBit can also help securely erase data. DBAN is specifically designed to wipe traditional hard drives (HDDs) by completely erasing all stored data. However, it is not suitable for solid-state drives (SSDs): wear leveling makes overwriting unreliable on SSDs, and the excessive write cycles shorten their lifespan.


The great software rewiring: AI isn’t just eating everything; it is everything

Right now, most large language models (LLMs) feel like a Swiss Army knife with infinite tools — exciting but overwhelming. Users don’t want to “figure out” AI. They want solutions, AI agents tailored for specific industries and workflows. Think: legal AI drafting contracts, financial AI managing investments, creative AI generating content, scientific AI accelerating research. Broad AI is interesting. Vertical AI is valuable. Right now, LLMs are too broad, too abstract, too unapproachable for most. A blank chat box is not a product, it is homework. If AI is going to replace applications, it must become invisible, integrating seamlessly into daily workflows without forcing users to think about prompts, settings or backend capabilities. The companies that succeed in this next wave will not just build better AI models, but better AI experiences. The future of computing is not about one AI that does everything. It is about many specialized AI systems that know exactly what users need and execute on that flawlessly. ... The old software model was built on scarcity. Control distribution, limit access, charge premiums. AI obliterates this. The new model is fluid, frictionless, and infinitely scalable.


Cybersecurity: The “What”, the “How” and the “Who” of Change

Cybersecurity is more complex than that: Protecting the firm from cyberthreats requires the ability to reach across corporate silos, beyond IT, towards business and support functions, as well as digitalised supply chains. You can throw as much money as you like at the problem, but if you give it to a technologist CISO to resolve, they will address it as a technology matter. They will put ticks on compliance checklists. They will close down audit points. They will deal with incidents and put out fires. They will deploy countless tools (to the point where this is now becoming a major operational issue). But they will not change the culture of your organisation around business protection and breaches will continue to happen as threats evolve. A lot has been said and written about the role of the “transformational CISO”, but I doubt there are many practitioners in the current generation of CISOs who can successfully wear that mantle. Simply because most have spent the last decade firefighting cyber incidents and have never been able to project a transformative vision over the mid to long-term, let alone deliver it. They have not developed the type of political finesse, of personal gravitas, of leadership in one word, that they would require to be trusted and succeed at delivering a truly transformative agenda across the complex and political silos of the modern enterprise.


CISOs and CIOs forge vital partnerships for business success

“One of the characteristics of a business-aligned CISO is they don’t use the veto card in every instance,” Ijam explains. “When the CISO is at the table and understands the importance of outcomes and deliverables from a business perspective as well as risk management from a security perspective, they are able to pick their battles in a smart way.” Forging a peer CIO/CISO partnership also requires the right set of leaders. While CIOs have been honing a business orientation for years, CISOs need to follow suit, maturing into a role that understands business strategy and is well-versed in the language so they command a seat at the table. “The right CISO leader is someone that doesn’t speak in ones and zeros,” Whiteside says. “They need to be at the table talking in terms that business leaders understand — not about firewalls and malware.” Becoming a C-suite peer also means cultivating an independent voice — important because CIOs and CISOs often have varying points of view, separate priorities, and different tolerances for risk. It’s equally important to make sure the CISO’s voice — and security recommendations — are part of every discussion related to business strategy, IT infrastructure, and critical systems at the beginning, not as an afterthought.


India’s Digital Personal Data Protection Act: A bold step with unfinished business

The release of the draft Digital Personal Data Protection Rules, 2025, on the 3rd of January aims to operationalise the provisions of the Act. The Act will undoubtedly go a long way in safeguarding digital personal data. Whilst the benefits to the common citizen are laudable, there are clearly areas that need to be urgently addressed. ... The draft rules mandate data localisation, restricting the transfer of certain personal data outside India. This approach has faced criticism for potentially increasing operational costs for businesses and creating barriers to global data flows. A flexible approach could be taken with regard to data flows with friendly and trusted nations. Allowing cross-border data transfers to trusted jurisdictions with robust data protection frameworks will position India as a key player in global trade. India wants to increase exports of goods and services to achieve its vision of “Viksit Bharat” by 2047. ... Clear, technology-driven mechanisms for age verification that are not overly intrusive still need to be determined. Implementing this rule from a pragmatic perspective will be onerous. Self-declaration may turn out to be a potential way forward, given India’s massive rural population that accesses online services and platforms and the difficulty of implementing parental consent.

Daily Tech Digest - March 09, 2025


Quote for the day:

"Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor


Software Development Teams Struggle as Security Debt Reaches Critical Levels

Software development teams face mounting challenges as security vulnerabilities pile up faster than they can be fixed. That's the key finding of Veracode's 15th annual State of Software Security (SoSS) report. ... According to Wysopal, several factors contribute to this prolonged remediation timeline:
- Growing Codebases and Complexity: As applications become larger and incorporate more third-party components, the scope for potential flaws increases, making it more time-consuming to isolate and remediate issues.
- Shifting Priorities: Many teams are under pressure to roll out new features rapidly. Security fixes are often deprioritized unless they are absolutely critical.
- Distributed Architectures: Modern microservices and container-based deployments can fragment responsibility and visibility. Coordinating fixes across multiple teams prolongs remediation.
- Shortage of Skilled AppSec Staff: Finding developers or security specialists with both security expertise and domain knowledge is challenging. Limited capacity can push out or delay fix timelines.
... "Many are using AI to speed up development processes and write code, which presents great risk," Wysopal said. "AI-generated code can introduce more flaws at greater velocity, unless they are thoroughly reviewed."


Want to win in the age of AI? You can either build it or build your business with it

From a business perspective, generative AI cannot operate in a technical vacuum -- AI-savvy subject matter experts are needed to adapt the technology to specific business requirements -- that's the domain expertise career track. "As AI models become more commoditized, specialized domain knowledge becomes increasingly valuable," Challapally said. "What sets true experts apart is their deep understanding of their specific industry combined with the ability to identify where and how gen AI can be effectively applied within it." Often, he warned, bots alone cannot relay such specific knowledge. ... The most intense need business leaders cite at this time "is for professionals who bridge both worlds -- those who deeply understand business requirements while also grasping the technical fundamentals of AI," he said. Rather than pure technologists, they seek individuals who combine traditional business acumen with technical literacy. "These are the type of people who can craft product visions, understand basic coding concepts, and gather sophisticated requirements that align technology capabilities with business goals." For those on the technical side, it's important "to master the art of prompting these tools to deliver accurate results," said Challapally.


Cyber Resilience Needs an Innovative Approach: Streamlining Incident Response for The Future

Incident response has historically been a reactive process, often hampered by time-consuming manual procedures and a lack of historical and real-time visibility. When a breach is detected, security teams scramble to piece together what happened, often working with fragmented information from multiple sources. This approach is not only slow but also prone to errors, leading to extended downtime, increased costs, and sometimes, the loss of crucial data. ... The quicker an enterprise or MSSP organization can respond to an incident, the lower the risk of disruption and the less damage it incurs. An innovative approach that automates and streamlines the collection and analysis of data in near real-time during a breach allows security teams to quickly understand the scope and impact, enabling faster decision-making and minimizing downtime. ... Automation reduces the risk of human error, which is often a significant factor in traditional incident response processes riddled with fragmented methodologies. By centralizing and correlating data from multiple sources, an automated investigation system provides a more accurate, consistent and comprehensive view of the incident, leading to better-informed, more effective containment and remediation efforts.
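As a minimal sketch of that idea (the tools, field names, and events below are hypothetical), centralizing and correlating data can be as simple as merging per-tool event feeds into one chronological timeline per affected host:

```python
from collections import defaultdict

# Hypothetical events from three different security tools.
edr_events = [{"host": "web-01", "ts": "2025-03-09T10:02:11Z", "detail": "suspicious process spawn"}]
fw_events  = [{"host": "web-01", "ts": "2025-03-09T10:02:40Z", "detail": "outbound beacon blocked"}]
idp_events = [{"host": "web-02", "ts": "2025-03-09T09:55:03Z", "detail": "impossible-travel login"}]

def correlate(*sources):
    """Group events from every source by host, then order each host's
    timeline chronologically so an analyst sees one consolidated view."""
    timeline = defaultdict(list)
    for source in sources:
        for event in source:
            timeline[event["host"]].append(event)
    for host in timeline:
        timeline[host].sort(key=lambda e: e["ts"])  # ISO-8601 UTC sorts lexically
    return dict(timeline)

for host, events in correlate(edr_events, fw_events, idp_events).items():
    print(host)
    for e in events:
        print(f"  {e['ts']}  {e['detail']}")
```

Real platforms do far more (normalization, enrichment, deduplication), but the core value is the same: one consistent view instead of three consoles.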


Data Is Risky Business: Is Data Governance Failing? Or Are We Failing Data Governance?

“Data governance” has become synonymous in some areas of academic study and industry publication with the development of legislation, regulation, and standards setting out rules and common requirements for how data should be processed or put to use. It is also still considered synonymous with or a sub-category of IT Governance in much of the academic literature. And let’s not forget our friends in records and information management and their offshoot of data governance. ... While there is extensive discussion in academia and in practitioner literature about the need for people to lead on data and the importance of people performing data stewardship-type roles, there is nothing that has dug deeper to identify what we mean by “the right people.” ... In the organizations of today, however, we are dealing with business leadership and technology leadership for whom these topics simply did not exist when they were engaged in study before entering the workforce. Therefore, they operate within the mode of thinking and apply the mental models that were taught to them, or which have dominated the cultures of the organizations where they have cut their teeth and the roles they have had as they moved from entry-level to management functions to leadership roles.


How CISOs Will Navigate The Threat Landscape Differently In 2025

In 2025, resilience is the cornerstone of effective cybersecurity. The shift from a defensive mindset to a proactive approach is evident in strategies such as advanced attack surface analytics, continuous threat modeling and offensive security testing. I’ve seen many penetration testing as a service (PTaaS) providers place an emphasis on integrating continuous penetration testing with attack surface management (ASM) as an example of how organizations can stay one step ahead of adversaries. Organizations using continuous pentesting reported 30% fewer breaches in 2024 compared to those relying solely on annual assessments, showcasing the value of a proactive approach. The adoption of cybersecurity frameworks such as NIST and ISO 27001 provides a structured approach to managing risks, but these frameworks must be tailored to the unique needs of each enterprise. For example, enterprises operating in regulated industries such as healthcare, finance and critical infrastructure must prioritize compliance while addressing sector-specific vulnerabilities. CISOs are focusing on data-driven decision making to quantify risks and justify investments. By tying cybersecurity initiatives to financial outcomes, such as reduced downtime and lower breach costs, CISOs can secure buy-in from stakeholders and ensure long-term sustainability.
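The article doesn't specify a model, but one common way to tie cybersecurity initiatives to financial outcomes is annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence). A minimal sketch with invented figures:

```python
def ale(single_loss_expectancy: float, annual_rate: float) -> float:
    """Annualized loss expectancy: expected yearly loss from one risk."""
    return single_loss_expectancy * annual_rate

# Hypothetical ransomware scenario: a $2M incident, made less likely by a control.
ale_before = ale(single_loss_expectancy=2_000_000, annual_rate=0.30)  # no control
ale_after  = ale(single_loss_expectancy=2_000_000, annual_rate=0.05)  # with control
control_cost = 150_000

# Return on the control: risk reduction minus its cost, relative to its cost.
roi = (ale_before - ale_after - control_cost) / control_cost
print(f"ALE before: ${ale_before:,.0f}, after: ${ale_after:,.0f}")
print(f"Control ROI: {roi:.0%}")
```

Numbers like these are only as good as their inputs, but they give CISOs a shared financial vocabulary with the board.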


AI and Automation: Key Pillars for Building Cyber Resilience

AI is now moving from training to inference, helping you quickly make sense of or create a plan from the information you have. This is made possible based on improvements to how AI understands massive amounts of semi-structured data. New AI can figure out the signal from the noise, a critical step in framing the cyber resilience problem. The power of AI as a programming language combined with its ability to ingest semi-structured data opens up a new world of network operations use cases. AI becomes an intelligent helpline, using the criteria you feed it to provide guidance to troubleshoot, remediate, or resolve a network security or availability problem. You get a resolution in hours or days – not the weeks or months it would have taken to do it manually. ... AI is not the same as automation; instead, it enhances automation by significantly speeding up iteration, learning, and problem-solving processes. New AI allows you to understand the entire scope of a problem before you automate and then automate strategically. Instead of learning on the job – when you have a cyber resilience challenge, and the clock is ticking – you improve your chances of getting it right the first time. As the effectiveness of network automation increases, so too will its adoption. 
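As a sketch of the "intelligent helpline" idea, assuming an OpenAI-style chat completions API (the model name, prompt, alert, and operating criteria are all illustrative, not from the article):

```python
from openai import OpenAI  # assumes the `openai` package and an API key are configured

client = OpenAI()

# Hypothetical alert and the criteria you feed the assistant, per the article's framing.
alert = "BGP session to edge router re03 flapping; 12% packet loss on uplink ge-0/0/1."
criteria = ("Prioritize restoring availability; all changes must be reversible; "
            "no config pushes after 18:00 local time.")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a network operations assistant. Given an alert and "
                    "operating criteria, propose ordered troubleshooting steps, "
                    "a remediation plan, and a rollback plan."},
        {"role": "user", "content": f"Alert: {alert}\nCriteria: {criteria}"},
    ],
)
print(response.choices[0].message.content)
```

The value, as the article notes, is compressing the scoping-and-planning phase; the automation that acts on the plan is a separate, deliberate step.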


Adaptive Cybersecurity: Strategies for a Resilient Cyberspace

We are led to consider ‘systems thinking’ to address cyber risk. This approach examines how all the systems we oversee interact on a larger scale, uncovering valuable insights to quantify and mitigate cyber risk. This perspective encourages a paradigm shift and a rethinking of traditional risk management practices, emphasizing the need for a more integrated and holistic approach. Evolving, increasingly sophisticated cyber risks have heightened both awareness and expectations around cybersecurity. Nowadays, businesses are evaluated on their preparedness, resilience and how effectively they respond to cyber risk. Moreover, it's crucial for companies to understand their disclosure obligations at market and industry levels. Consequently, regulators and investors demand that boards prioritize cybersecurity through strong governance. ... The CISO's role has evolved to include viewing cybersecurity not merely as an IT issue but as a strategic and business risk. This shift demands that CISOs possess a combination of technical expertise and strong communication skills, enabling them to bridge the gap between technology and business leaders. They should leverage predictive analytics or AI-based threat detection tools to proactively manage emerging cyber risks.


Choosing Manual or Auto-Instrumentation for Mobile Observability

Mobile apps run on specific devices and operating systems, which means that certain operations are standard across every app instance. For example, in an iOS app built on UIKit, the didFinishLaunchingWithOptions method informs the app developer that a freshly launched app is almost ready to run. Listening for this method in any app lets you automatically observe when app launch completes. Quick, out-of-the-box instrumentation like this is easy to use. By importing an auto-instrumentation library into your app, you can hook into the activity of your application without writing custom code. Using auto-instrumentation provides standardized signals for actions that should be recognized in a prescribed way. You could listen for app launch, as described above, but also for the loading of views, for the beginnings and ends of network requests, for crashes, and so on. Observability would be great if imported libraries did all the work. ... However, making sense of your mobile app requires more than just monitoring the ubiquitous signals of mobile app development. For one, mobile telemetry collection and transmission can be limited by the operating system the app user chooses; operating systems are not designed to expose every signal on their own.
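Mobile SDKs vary by platform, so as a stand-in for the iOS/Android libraries the article describes, here is the manual-instrumentation side of the trade-off sketched with OpenTelemetry's Python API; the span name, attribute, and console exporter setup are illustrative:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints spans to the console (illustrative setup).
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("app.launch")

# Manual instrumentation: you decide what "app launch" means and what
# business context to attach -- something auto-instrumentation cannot know.
with tracer.start_as_current_span("app_launch") as span:
    span.set_attribute("launch.cold_start", True)  # illustrative attribute
    # ... app initialization work would run here, timed by the span ...
```

Auto-instrumentation would emit the standard signals for free; the manual span is where app-specific meaning gets added.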


Planning ahead around data migrations

Understanding the full inventory of components involved in the data migration is crucial. However, it is equally essential to have a clearly defined target and to communicate this target to all stakeholders. This includes outlining the potential implications of the migration for each stakeholder. The impact of the migration will vary significantly depending on the nature of the project. For example, a simple infrastructure refresh will have a much smaller impact than a complete overhaul of the database technology. In the case of an infrastructure refresh, the primary impact might be a brief period of downtime while the new hardware is installed and the data is transferred. Stakeholders may need to adjust their workflows to accommodate this downtime, but the overall impact on their day-to-day operations should be minimal. On the other hand, a complete change of database technology could have far-reaching implications. Stakeholders may need to learn new skills to interact with the new database, and existing applications may need to be modified or even completely rewritten to be compatible with the new technology. This could result in a significant investment of time and resources, and there may be a period of adjustment while everyone gets used to the new system.
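One lightweight way to keep that inventory, and its per-stakeholder impact, in a form that can be communicated up front (the components and impact levels below are invented for illustration) is a structured record per component:

```python
from dataclasses import dataclass

@dataclass
class MigrationItem:
    component: str
    stakeholders: list[str]
    impact: str   # "low" (brief downtime) .. "high" (rewrite required)
    note: str

# Hypothetical inventory for a database-technology change.
inventory = [
    MigrationItem("orders-db", ["order-team", "finance"], "high",
                  "SQL dialect changes; reporting queries need rewriting"),
    MigrationItem("reporting-etl", ["analytics"], "medium",
                  "connection strings and drivers must be updated"),
    MigrationItem("static-assets", ["web-team"], "low",
                  "copy only; brief read-only window"),
]

# Surface the highest-impact items first when briefing stakeholders.
rank = {"high": 0, "medium": 1, "low": 2}
for item in sorted(inventory, key=lambda i: rank[i.impact]):
    who = ", ".join(item.stakeholders)
    print(f"[{item.impact.upper():6}] {item.component}: notify {who} -- {item.note}")
```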


Your AI coder is writing tomorrow’s technical debt

With AI, this problem gets exponentially worse. Let’s say a machine writes a million lines of code – it can hold all of that in its head and figure things out. But a human? Even if you wanted to address a problem, you couldn’t. It’s impossible to sift through that much code you’ve never seen before just to find where the problem might be. In our case, what made it particularly tricky was that the AI-generated code had very subtle logical flaws: not even syntactic issues, just small problems in the execution logic that you wouldn’t notice at a glance. The volume of technical debt increases not just because of complexity, but simply because of the sheer amount of code being shipped. It’s a natural law. Even as humans, if you ship more code, you will have more bugs and more debt. If you are exponentially increasing the amount of code you’re shipping with AI, then yes, maybe you catch some issues during review, but what slips through just gets shipped. The volume itself becomes the problem. ... the solution lies in far better communication throughout the whole organisation, coupled with robust processes and tooling. ... The tooling side is equally important. We’ve customised our AI tools’ settings to align with our tech stack and standards. Things like prompt templates that enforce our coding style, pre-configured with our preferred libraries and frameworks.
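As a purely hypothetical illustration of such tooling (this is not the author's actual configuration; the constraints and template text are invented), a team might wrap every code-generation request in a house-style prompt:

```python
# Hypothetical house-style prompt template for AI code-generation requests.
HOUSE_STYLE_TEMPLATE = """You are generating code for our backend team.
Constraints (non-negotiable):
- Language: Python 3.12, typed (PEP 484 annotations on all public functions).
- HTTP: use `httpx`, never `requests`.
- Keep functions under 40 lines; no new module-level globals.
- Every public function needs a one-line docstring.

Task:
{task}
"""

def build_prompt(task: str) -> str:
    """Wrap a free-form task in the team's coding-style constraints."""
    return HOUSE_STYLE_TEMPLATE.format(task=task)

print(build_prompt("Write a function that retries a flaky API call with backoff."))
```

Templates like this don't eliminate review, but they narrow the variance in what the AI ships, which is exactly where the debt accumulates.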