Daily Tech Digest - December 06, 2024

Preparing for AI-Augmented Software Engineering

AI-augmented approaches will free software engineers to focus on tasks that require critical thinking and creativity, predicts John Robert, deputy director of the software solutions division of the Carnegie Mellon University Software Engineering Institute. "A key potential benefit that excites most enthusiasts of AI-augmented software engineering approaches is efficiency -- the ability to develop more code in less time and lower the barrier to entry for some tasks." Teaming humans and AI will shift the attention of humans to the conceptual tasks that computers aren't good at while reducing human error from tasks where AI can help, he observes in an email interview. ... Hall notes that GenAI can access vast amounts of data to analyze market trends, current user behavior, customer feedback, and usage data to help identify key features that are in high demand and have the potential to deliver significant value to users. "Once features are described and prioritized, multiple agents can create the software program's components." This approach breaks down big tasks into multiple activities with an overall architecture. "It truly changes how we solve complex issues and apply technology."


Code Busters: Are Ghost Engineers Haunting DevOps Productivity?

The assertion here is that almost 10% of software application developers do effectively nothing all day, or indeed all week. For wider clarification, the remote worker segment has more outlier positive performers, but in-office workers exhibit a higher average performance overall. ... “Many ghost engineers I’ve talked to share a common story, i.e. they become disengaged due to frustration or loss of motivation in their roles. Over time, they may test the limits of how much effort they can reduce without consequence. This gradual disengagement often results in them turning into ghosts; originally not out of malice, but as a by-product of their work environment.” He says that managers want to build high-performing teams but face conflicting incentives. A poorly performing team reflects badly on its leadership, leading some to downplay problems rather than address them head-on. Additionally, organizational politics may discourage reducing team sizes, even when smaller, more focused teams could be more effective. ... “There’s also the fact that senior leaders are often further removed from day-to-day operations. Their decisions are based on trust in middle management or flawed metrics, such as lines of code or commit counts. They, too, are sometimes not incentivized to reduce team sizes or deeply investigate performance issues, as their focus tends to be on higher-level strategic outcomes,” said Denisov-Blanch.


Why Data Centers Must Strengthen Network Resiliency In The Age of AI

If a network outage occurs, there will be widespread disruptions, negatively affecting businesses globally. In particular, network outages will compromise the accessibility of AI applications, the very thing data centers scaled to support. Outages—and even reduced performance—carry significant risks, both financial and reputational. Data centers must therefore adopt network solutions, like Failover to Cellular and out-of-band (OOB) management, to ensure AI services remain accessible amid disruptions to normal operations. ... OOB management capabilities and Failover to Cellular integration lay a solid foundation for network resilience. However, data centers don’t need to stop there. AI integrations promise further enhancements, elevating these tools to the next level through advanced intelligence and automation. While it may seem odd to use AI when the extra stress on data centers today comes from increased AI usage, the advanced capabilities and accompanying benefits of this technology speak for themselves. AI’s ability to analyze patterns allows it to detect connectivity issues that could cause failures. When combined with Failover to Cellular, for example, AI orchestrates a seamless Failover to Cellular backup, especially during peak traffic. AI can also automatically take proactive measures like predictive maintenance or rerouting traffic, reducing downtime and improving resilience.


Financial services need digital identity stitched together, investors take note

Financial institutions are all looking for a low friction, high accuracy way of authenticating customers, prospects and business partners that also keeps regulators happy. Some of the approaches and techniques used by established players in the digital identity market have achieved good volume and scale, and newer innovative methods are still proving themselves. Byunn highlights the opportunity in a third layer that’s “all about how you stitch these things together, because so far no one has produced a single solution that addresses everything.” This layer, he says, includes both “orchestration” and elements of holistic scoring (heuristics etc.) “that are not fully covered by what the market calls orchestration.” Earlier waves of technology serving financial services companies were thoroughly penetrated by fraudsters, and in some cases offered poor user experience, Byunn says. One example of this, knowledge-based authentication, remains “shockingly still prevalent in the industry.” ... The threat of deepfakes to financial service institutions seems to be commonly overstated at this time, according to Byunn, at least in part because conventional wisdom is also somewhat underestimating the effectiveness of market leaders’ defense against genAI and deepfakes. However, he notes that the threat has the potential to grow significantly.


The world is running short of copper - telecoms networks could be the answer

Copper remains foundational in older telecom networks, particularly in Europe and North America, with incumbent operators like AT&T, Orange, and BT. However, networks are actively transitioning from copper to fiber optics particularly with ‘last mile connectivity’ and the replacement of infrastructure like Public Switched Telephone Networks (PSTN). While recycling from these sources may not completely plug the 20 percent gap in supply, it can go a long way. It almost goes without saying, that precious metals reclaimed this way have far less environmental impact - around 15 times less. Purchasing copper from these sources is still often cheaper than mining it. ... Over the next eight to ten years, an estimated 800,000 tons of copper could be extracted from telecom networks as part of the global shift to fiber optics. ... Unlocking the value of reclaimed copper is both an environmental and strategic win, especially with the soaring demand for this vital resource. Through effective partnerships and advanced material recovery processes, telecom companies can transform what was once surplus to requirements into a valuable asset. Extracted copper can re-enter the supply chain, supporting the broader green transition and reducing reliance on new mining operations. 


8 biggest cybersecurity threats manufacturers face

The manufacturing sector’s rapid digital transformation, complex supply chains, and reliance on third-party vendors make for a challenging cyber threat environment for CISOs. Manufacturers — often prime targets for state-sponsored malicious actors and ransomware gangs — face the difficult task of maintaining cost-effective operations while modernizing their network infrastructure. “Many manufacturing systems rely on outdated technology that lacks modern security measures, creating exploitable vulnerabilities,” says Paul Cragg, CTO at managed security services firm NormCyber. “This is exacerbated by the integration of industrial internet of things [IIoT] devices, which expand the attack surface.” ... “While industries like chemicals and semiconductors exhibit relatively higher cybersecurity maturity, others, such as food and beverage or textiles, lag significantly,” Belal says. “Even within advanced sectors, inconsistencies persist across organizations.” Operational technology systems — which may include complex robotics and automation components — are typically replaced far more slowly than components of IT networks are, contributing to the growing security debt that many manufacturers carry.


What is a data scientist? A key data analytics role and a lucrative career

Data scientists often work with data analysts, but their roles differ considerably. Data scientists are often engaged in long-term research and prediction, while data analysts seek to support business leaders in making tactical decisions through reporting and ad hoc queries aimed at describing the current state of reality for their organizations based on present and historical data. So the difference between the work of data analysts and that of data scientists often comes down to timescale. A data analyst might help an organization better understand how its customers use its product in the present moment, whereas a data scientist might use insights generated from that data analysis to help design a new product that anticipates future customer needs. ... Data scientists need to manipulate data, implement algorithms, and automate tasks, and proficiency in programming is essential. Van Loon notes that critical languages include Python, R, and SQL. ... They need a strong foundation in both to analyze data accurately and make informed decisions. They also need to understand statistical tests, distributions, likelihoods, and concepts such as hypothesis testing, regression analysis, and Bayesian inference. 


How Active Archives Address AI’s Growing Energy and Storage Demands

Archives were once considered repositories of data that would only be accessed occasionally, if at all. The advent of modern AI has changed the equation. Almost all enterprise data could be valuable if made available to an AI engine. Therefore, many enterprises are turning to archiving to gather organizational data in one place and make it available for AI and GenAI tools to access. Massive data archives can be stored in an active archive at a cost-efficient price and at very low energy consumption levels, all while keeping that data readily available on the network. Decades of archived data can then be analyzed as part of an LLM or other machine learning or deep learning algorithm. ... An intelligent data management software layer is the foundation of an active archive. This software layer plays a vital role in automatically moving data according to user-defined policies to where it belongs for cost, performance, and workload priorities. High-value data that is often accessed can be retained in memory. Other data can reside on SSDs, lower tiers of disks, and within a tape- or cloud-based active archive. This allows AI applications to mine all that data without being subjected to delays due to content being stored offsite or having to be transferred to where AI can process it.


The Growing Importance of AI Governance

The goal of AI governance is to ensure that the benefits of machine learning algorithms and other forms of artificial intelligence are available to everyone in a fair and equitable manner. AI governance is intended to promote the ethical application of the technology so that its use is transparent, safe, private, accountable, and free of bias. To be effective, AI governance must bring together government agencies, researchers, system designers, industry organizations, and public interest groups. ... The long-term success of AI depends on gaining public trust as much as it does on the technical capabilities of AI systems. In response to the potential threats posed by artificial intelligence, the U.S. Office of Science and Technology Policy (OSTP) has issued a Blueprint for an AI Bill of Rights that’s intended to serve as “a guide for a society that protects all people” from misuse of the technology. ... As AI systems become more powerful and complex, businesses and regulatory agencies face two formidable obstacles: The complexity of the systems requires rule-making by technologists rather than politicians, bureaucrats, and judges. The thorniest issues in AI governance involve value-based decisions rather than purely technical ones.


The Role of AI in Cybersecurity: 5 Trends to Watch in 2025

The integration of AI into Software-as-As-Service (SaaS) platforms is changing how businesses manage security. For example, AI-enhanced tools are helping organizations automate threat detection, analyze vast data sets more efficiently, and respond to breaches or incidents more quickly. However, this innovation also introduces new risks such as hallucinations and an over-reliance on potentially poor data quality, meaning AI-powered systems need to be carefully configured to avoid outputs that mislead and are disadvantageous to defenders. ... AI auditing tools will help organizations assess whether AI models are making decisions based on biased or discriminatory data – a concern that could lead to legal and reputational challenges. As AI technology becomes more embedded in organizational operations, ethical considerations must be at the forefront of AI governance to help businesses avoid unintended consequences. Board members must be proactive in understanding the implications of AI on data security and ensuring that their companies are following best practices in AI governance for compliance with evolving legislation. Without C-suite support and understanding, and collaboration between executives and security teams, organizations will be more vulnerable to the potential risks AI poses to data and intellectual property.



Quote for the day:

"Leadership is about making others better as a result of your presence and making sure that impact lasts in your absence." -- Sheryl Sandberg

Daily Tech Digest - December 05, 2024

Fintech Partnership Streamlines Banking Data Integrations

“We’re on the brink of enabling non-programmers to build integrations with minimal effort,” Skye Isard, Sandbox Banking co-founder and CTO, told The New Stack. “AI-driven tools can automate the creation of logic for integrations, seriously reducing the complexity and time required to deploy new workflows. “AI is empowering ‘citizen developers‘ — individuals without coding expertise — to create automations and integrations, further democratizing access to technology. AI allows us to leverage our vast library of API documentation and prebuilt integrations to create even more intelligent and automated solutions. We envision a future where AI can generate integration logic, making it easier for non-programmers to build and deploy integrations.” ... Given the sensitive nature of banking data, Sandbox Banking prioritizes security, Isard said. Its platform adheres to stringent compliance standards, including SOC2 audits, recurring penetration testing and advanced encryption protocols. Data persistence is minimized, with live databases retaining sensitive information for no more than 14 days. These measures ensure that Sandbox Banking’s solutions not only improve efficiency but also meet high standards of data protection and privacy, Isard said.


Dear CEO: It’s time to rethink security leadership and empower your CISO

The stakes have never been higher. Every week, another breach makes headlines, costing millions in losses, irreparable damage to reputations, and a wave of uncertainty that ripples through customers and stakeholders alike. But consider this: Who is truly liable when things go wrong? You might assume the CISO holds the liability, but if they aren’t empowered with the authority, resources, and support to act effectively, can we honestly place the blame there? ... Giving the CISO a seat at the table isn’t a symbolic gesture — it’s a practical necessity. It allows us to align security strategies with business goals, identify risks before they become roadblocks, and ensure that opportunities are pursued without unnecessary exposure. When CISOs are integrated into the executive team, they’re not just protecting the business; they’re enabling it to grow with confidence. That said, some CEOs reading this may not have this type of CISO in their organization today. If that’s the case, it’s worth asking why. Is the person in the CISO seat there to simply tick a box? If so, that’s a recipe for disaster. The No. 1 core competency a CISO should possess is leadership — the ability to inspire, align, and drive a security strategy that supports and advances the business.


What are AI agents and why are they now so pervasive?

Agentic AI operates in two key ways. First, it offers specialized agents capable of autonomously completing tasks across the open web, in mobile apps, or as an operating system. A specific type of agentic AI, called conversational web agents, functions much like chatbots. In this case, the agentic AI engages users through multimodal conversations, extending beyond simple text chats to accompany them as they navigate the open web or use apps ... AI agents are already showing up in places you might not expect. For example, most self-driving vehicles today use sensors to collect data about their surroundings, which is then processed by AI agentic software to create a map and navigate the vehicle. AI agents play several other critical roles in autonomous vehicle route optimization, traffic management, and real-time decision-making — they can even predict when a vehicle needs maintenance. ... Notably, AI agents also have the ability to remember past interactions and behaviors. They can store those experiences and even perform “self-reflection” or evaluation to inform future actions, according to IDC. ... And while most agentic AI tools claim to be safe and secure, a lot depends on the information sources they use. That’s because the source of data used by the agents could vary — from more limited corporate data to the wide open internet.


Failover vs. Failback: Two Disaster Recovery Methods

Failover is critical in a business continuity event because it keeps operations running. By having a system to which your business can transition when a primary system is unavailable, you're able to continue doing business. People can work, revenue streams are preserved, and customers can be served. Without failover, these functions could grind to a halt, leading to significant disruption. Many organizations depend on technology for critical processes, and when those processes are unavailable, analog alternatives may be insufficient or entirely obsolete. Failover ensures that even in a disaster, the business keeps moving. Failback comes into play once the need for failover ends. As the disaster is resolved, failback allows the organization to return to normal operations. Typically, failback is necessary when the standby system cannot sustain operations as effectively as the primary system. For instance, a standby system may not be a full replica of the primary system and might be designed only for temporary use during an emergency. In an ideal world, every business would maintain two fully operational environments: a primary environment and an identical standby environment. This setup would allow for seamless transitions during disasters, ensuring that business operations are completely unaffected.


Burnout: A chronic epidemic in the IT industry

For IT leaders aware of the impact burnout can have on their staff, the reality of exhaustion in IT and tech is further complicated by the fact that burnout isn’t caused by just one thing. It’s a problem that builds slowly over time, leading to disengaged and unmotivated employees with one foot out the door. It can be hard to spot, too. ... Another contributing factor to burnout is the rapid adoption of AI, which has left a lot of workers feeling overwhelmed by keeping up with the latest industry trends. While it’s often touted as a productivity booster, 85% of IT leaders plan to make AI technology mandatory or encourage it’s use in the coming year, which is increasing pressure on workers to upskill, according to Upwork. In fact, 77% of employees said AI has added to their workloads, rather than relieved their daily responsibilities. Cybersecurity professionals feel the pressure of AI, too, with 42% reporting they have concerns about AI-powered attacks, according to BlackFog. To help combat this, 41% also say they need bigger budgets for security tools, along with more support to alleviate pressure to keep the organization safe. Burnout can lead to dangerous results when it comes to security as 63% of respondents said their team experiences alert fatigue, which desensitizes them to the urgency of security events.


Why Banks Need Flexible Tech Architecture — and How to Build It

To operate and launch the banking experience of the future, banks and credit unions must implement “MACH” and “composable” technologies that allow digital teams to take advantage of future-proofed, in-the-moment innovations.Composable technology stresses a modular approach that enables organizations to obtain the best options for their needs in each aspect of their tech, all options working together regardless of the source. MACH — I’ll get into the details below — is an approach to achieving composability. ... Considered a more modern approach, MACH is a standard way of building technology that enables organizations to develop a flexible enterprise tech stack in which each component is modular, scalable and easily replaced. MACH technologies must be microservices-based, API-first, cloud-native SaaS, and “headless,” in which the customer’s front-end digital experience is decoupled from the back-end programming. Companies that leverage a composable approach using MACH principles can prepare for future innovation through a more resilient and modern tech stack. ... The advantage of a MACH architecture includes being able to select modular, best-of-breed solutions to integrate into the overall tech stack, while ensuring each of the pieces work together seamlessly. 


Analysing Linus Torvald’s Critique of Docker

First off, we ditch the shared-kernel approach entirely. We need to build a micro-hypervisor model, where each container runs its own minimal kernel. This ensures that every container is genuinely isolated, similar to a lightweight VM but without the bloat. By employing a microkernel architecture, you’re essentially granting each container its own mini-OS that only loads essential components, drastically reducing the attack surface. This step eliminates the primary flaw of Docker’s shared-kernel model. Next, leverage hardware-assisted virtualisation like Intel VT-x or AMD-V to handle isolation efficiently. This is where we’ll differentiate ourselves from Docker’s reliance on namespaces. With hardware support, each container will get near-native performance while maintaining strict separation. For example, instead of binding everything to a Linux kernel, containers will interact directly with hardware-level isolation, meaning exploits won’t have the chance to jump from one container to another. We can’t ignore orchestration. Rather than bolting on security later, build an orchestration layer that enforces strict security policies from the get-go. This orchestration tool, think Kubernetes but with security baked in, will enforce seccomp, AppArmor, and SELinux profiles automatically based on container configurations. 


Leaders must balance humility with inspiration to foster a culture of curiosity and courage

It is important how do we, as leaders, show up. The second is culture. What kind of culture do we create as leaders? Fostering an environment that encourages adaptability, resilience, and openness to change, rather than rigidity or resistance. And, the third important factor is the system. What kind of systems do we establish to continuously adopt and adapt to change, ensuring the organization remains flexible and forward-looking? To inspire collaboration and trust among the team, Divya sees humility as a crucial factor. Leaders must first acknowledge that they don’t have all the answers. “When leaders demonstrate vulnerability, team members are more likely to step forward with their knowledge and ideas.” Citing the example of leading by example, she mentioned how her current CFO attended a two-month machine learning course at the London School of Economics, signaling his willingness to learn and adapt. This motivated the entire organisation to upskill and embrace new technologies. Creating the right culture is the next step. Leaders must foster curiosity by rewarding those who explore new knowledge and share their insights. For example, celebrating a retail employee who transitioned into data analytics inspires others to follow suit.


How to Keep IT Team Boredom From Killing Productivity

A bored IT team is a ticking time bomb, Herberger warns. "The risks are clear: increased turnover as talent walks out the door, underperformance that drags down productivity, and a contagious drop in morale that can spread like a virus across the organization," he says. "Worse, in a competitive industry, boredom kills innovation, leaving your company vulnerable to being outpaced by more engaged and agile competitors." A disengaged IT team, or team subset, can negatively impact business performance, since members are probably not contributing to their full abilities. ... To reinvigorate a sagging IT team, Herberger recommends shaking things up by introducing fresh challenges and innovation opportunities: "Whether it's rotating team roles, fostering a culture of collaboration, or carving out time for passion projects, the goal is clear: disrupt the routine, reawaken creativity, and make the team feel like they're part of something bigger than just punching the clock." ... Daly recommends that IT leaders stay close to their workforce in order to understand their engagement levels, manage mundane tasks effectively, and create space for more interesting assignments. To help prevent disengagement, he suggests offering learning opportunities and activities that promote development and growth.


Why and how to craft an effective hyperscale cloud exit strategy

If a business does choose to go down the hyperscale route, my advice is to formulate an exit plan before onboarding. It’s a key part of contingency planning and should be thought through and finalized before any vendor contract is signed. A cloud exit strategy acts as an insurance policy for events that are both inside and outside of an organization’s control. ... An organization should bring together representatives from each area of a business, ranging from the IT leadership and technology architecture teams, to procurement and sourcing, legal and compliance, and finance. Together, they need to understand how the current infrastructure set up is designed and the specific servers that are being used. They also need to carry out a detailed audit of what’s included in their monthly bills, any major inefficiencies, and details of platform integrations and tightly coupled systems. Having this information will make it far easier to plan out a phased exit from hyperscale cloud, or better facilitate a seamless move to a smaller, private cloud environment. ... And lastly, any exit plan should budget for migration costs, which are often overlooked. The budget should include the cost of hardware for on-prem and colocation options, the cost of hosting for infrastructure as a service (IaaS) options, data migration fees, labor costs, post migration expenses and costs of any service overlaps. 



Quote for the day:

"What seems to us as bitter trials are often blessings in disguise." -- Oscar Wilde

Daily Tech Digest - December 04, 2024

Will AI help doctors decide whether you live or die?

One of the things GPT-4 “was terrible at” compared to human doctors is causally linked diagnoses, Rodman said. “There was a case where you had to recognize that a patient had dermatomyositis, an autoimmune condition responding to cancer, because of colon cancer. The physicians mostly recognized that the patient had colon cancer, and it was causing dermatomyositis. GPT got really stuck,” he said. IDC’s Shegewi points out that if AI models are not tuned rigorously and with “proper guardrails” or safety mechanisms, the technology can provide “plausible but incorrect information, leading to misinformation. “Clinicians may also become de-skilled as over-reliance on the outputs of AI diminishes critical thinking,” Shegewi said. “Large-scale deployments will likely raise issues concerning patient data privacy and regulatory compliance. The risk for bias, inherent in any AI model, is also huge and might harm underrepresented populations.” Additionally, AI’s increasing use by healthcare insurance companies doesn’t typically translate into what’s best for a patient. Doctors who face an onslaught of AI-generated patient care denials from insurance companies are fighting back — and they’re using the same technology to automate their appeals.


The Rise Of ‘Quiet Hiring’: 5 Ways To Use Trend For A Career Advantage

Adaptability is key in quiet hiring. When I interviewed Ross Thornley, Co-founder of AQai, an organization that provides adaptability training, he said, "We’re entering a period of volatility where expanding adaptability skills is essential." Whether it’s learning to manage budgets, mastering new software, or brushing up on leadership skills, the more versatile you are, the more indispensable you become. ... You might feel uncomfortable tooting your own horn, but staying silent about your successes can hurt you in the long run. Keep track of your achievements as you take on extra responsibilities. Highlight the skills you’re building and the results you’re delivering. Then, share them in conversations with your manager or during performance reviews. By showcasing your value, you ensure your work doesn’t go unnoticed. ... When holding onto status-quo ways, employees limit themselves from reaching heights that might improve engagement. Without exploration, there’s a greater potential to be misaligned with a job or responsibility that isn’t motivating. Every new role—whether formal or not—is an opportunity to grow and explore. Use this time to test out roles you might not have considered. See if you enjoy the work or if it’s a stepping stone to something even better.


Creating a unified data, AI and infrastructure strategy to scale innovation ambitions

To effectively leverage data and AI, organisations must first shift their mindset from merely collecting data to actively connecting the dots. This involves identifying the core problem that needs to be addressed and focusing on use cases that will yield maximum business impact, rather than isolating data collection and AI model development. ... To enhance AI implementation, organisations should shift from a use-case-driven approach to a capability-driven strategy, focusing on building reusable AI capabilities such as conversational AI and voice analytics for both internal and external service desks. A company exploring numerous use cases can then group them into distinct capabilities for greater efficiency. Establishing a centralised team dedicated to data, AI and infrastructure is essential to create a robust foundation and platform while allowing business units to develop their own AI-powered applications on top, ensuring consistency across the organisation. ... To succeed in scaling innovation and AI, organisations must move from merely collecting data to actively connecting data, AI and infrastructure. Today’s advancements in cloud and data management technologies enable this integration, fostering collaboration and driving innovation at scale.


AWS introduces S3 Tables, a new bucket type for data analytics

The new bucket type is S3 Table, for storing data in Apache Iceberg format. Iceberg is an open table format (OTF) used for storing data for analytics, and with richer features than Parquet alone. Parquet is the format used by Hadoop and by many data processing frameworks. Parquet and Iceberg are already widely used on S3, so why a new bucket type? Warfield said the popularity of Parquet in S3 was the rationale for S3 Tables. "We actually serve about 15 million requests per second to Parquet tables," he told us, but there is a maintenance burden. Internally, he said, "the structure of them is a lot like git, a ledger of changes, and the mutations get added as snapshots. Even with a relatively low rate of updates into your OTF you can quickly end up with hundreds of thousands of objects under your table." The consequence is poor performance. "In the OTF world it was anticipated that this would happen, but it was left to the customer to do the table maintenance tasks," Warfield said. The Iceberg project includes code to expire snapshots and clean up metadata, but it is still necessary "to go and schedule and run those Spark jobs." Apache Spark is a SQL engine for large scale data. Parquet on S3 was "a storage system on top of a storage system," said Warfield, making it sub-optimal.


Innovation Is Fun, but Infrastructure Pays the Bills

Innovation and platform infrastructure are intertwined — each move affects the other. Yet, many companies are stumbling because they’re too focused on innovation. They’re churning out apps, features, and updates at breakneck speed, all while standing on a wobbly foundation. It’s a classic case of putting the cart before the horse, and it affects the intended impact of some really great ideas. A strong platform infrastructure is your ticket to scalability and flexibility. It lets you pivot quickly to meet new market demands, integrate cutting-edge technologies, and expand your services without tearing everything down and starting from scratch. Plus, it trims the fat off your development and deployment times, letting you bring innovative ideas to market faster. Sidestepping platform infrastructure is a recipe for disaster. It can make your application sluggish, prone to crashes, and a sitting duck for cyberattacks. This isn’t just a headache for users — it’s a surefire way to tarnish your product’s reputation and negatively affect its success. Think of it like building a mansion on a shaky foundation; it doesn’t matter how grand it looks if it’s doomed to collapse.


Open-washing and the illusion of AI openness

Open-washing in AI refers to companies overstating their commitment to openness while keeping critical components proprietary. This approach isn’t new. We’ve seen cloud-washing, AI-washing, and now open-washing, all called out here. Marketing firms want the concept of being “open” to put them in a virtuous category of companies that save baby seals from oil spills. I don’t knock them, but let’s not get too far over our skis, billion-dollar tech companies. ... At the heart of open-washing is a distortion of the principles of openness, transparency, and reusability. Transparency in AI would entail publicly documenting how models are developed, trained, fine-tuned, and deployed. This would include full access to the data sets, weights, architectures, and decision-making processes involved in the models’ construction. Most AI companies fall short of this level of transparency. By selectively releasing parts of their models—often stripped of key details—they craft an illusion of openness. Reusability, another pillar of openness, is much the same. Companies allow access to their models via APIs or lightweight downloadable versions but prevent meaningful adaptation by tying usage to proprietary ecosystems. 


Microsoft hit with more litigation accusing it of predatory pricing

“All UK businesses and organizations that bought licenses for Windows Server via Amazon’s AWS, Google Cloud Platform, and Alibaba Cloud may have been overcharged and will be represented in this new ‘opt-out’ collective action,” the law firm statement said. The accusations make sense when viewed from a compliance/regulatory perspective. Although companies are allowed to give volume discounts and to offer other pricing differences for different customers, compliance issues kick in when the company controls an especially high percentage of the market. ... “Put simply, Microsoft is punishing UK businesses and organizations for using Google, Amazon, and Alibaba for cloud computing by forcing them to pay more money for Windows Server. By doing so, Microsoft is trying to force customers into using its cloud computing service, Azure, and restricting competition in the sector,” Stasi said. “This lawsuit aims to challenge Microsoft’s anti-competitive behavior, push them to reveal exactly how much businesses in the UK have been illegally penalized, and return the money to organizations that have been unfairly overcharged.”


Balancing tradition and innovation in the digital age

It’s easy to get carried away by the hype of cutting-edge technology. For me, it’s about making sure that you always ask yourself if you’re solving an actual business problem. That has to be front of mind, as opposed to being solution- or tech-first. You also have to ask yourself if the business problem requires nascent or proven tech? Once you figure that out, the tech side answer is relatively straightforward. So, even with leveraging emerging tech, you need to think congruently about your business model. ... Security is the first thing I looked at. Even in my interview, I said it would be the first thing I looked at, and it has been. Security and privacy are the basic foundations of trust, and customer and community trust is what our business is built on. So, my approach is to spend money to bring in deep expertise, which I have, and empower them to go deep into our current state and be honest about any gaps we might have. And to think about where we implement both tactical and strategic ways to bridge those gaps. It’s also important to be clear about the risk we hold and how long we want to hold it for and focus on building a response plan. So, if and when an incident occurs, we can recover and respond gracefully and have solid comms plans and playbooks in place. 


Threat intelligence and why it matters for cybersecurity

Cyber threat intelligence – who needs it? The short answer is everyone. Cyber threat intelligence is for anyone with a vested interest in the cybersecurity infrastructure of an organization. Although CTI can be tailored to suit any audience, in most cases, threat intelligence teams work closely with the Security Operation Centre (SOC) that monitors and protects a business on a daily basis. Research shows that CTI has proved beneficial to people at all levels of government (national, regional or local), from security officers, police chiefs and policymakers, to information technology specialists and law enforcement officers. It also provides value to many other professionals, such as IT managers, accountants and criminal analysts. ... The creation of cyber threat intelligence is a circular process known as an “intelligence cycle”. In this cycle, which consists of five stages, data collection is planned, implemented and evaluated; the results are then analysed to produce intelligence, which is later disseminated and re-evaluated against new information and consumer feedback. The circularity of the process means that gaps are identified in the intelligence delivered, initiating new collection requirements and launching the intelligence cycle all over again.


Securing AI’s new frontier: Visibility, governance, and mitigating compliance risks

Securing and governing the use of data for AI/ML model training is perhaps the most challenging and pressing issue in AI security. Using confidential or protected information during the training or fine-tuning process comes with the risk that data could be recoverable through model extraction techniques or using common adversarial techniques (i.e., prompt injection, jailbreak). Following data security and least-privilege access best practices is essential for protecting data during development, but bespoke AI runtime threat detection is response is required to avoid exfiltration of data via model responses. ... Securing AI applications in production is equally important as securing the underlying infrastructure and is a key component of maintaining a secure data and AI lifecycle. This requires real-time monitoring of both prompts and responses to identify, notify, and block security and safety threats. A robust AI security solution prevents adversarial attacks like prompt injection, masks sensitive data to prevent exfiltration via a model response, and also addresses safety concerns such as bias, fairness, and harmful content. 



Quote for the day:

"Leading people is like cooking. Don_t stir too much; It annoys the ingredients_and spoils the food" -- Rick Julian

Daily Tech Digest - December 03, 2024

Why DevOps Is Backward and How We Can Solve It

Perhaps the term “DevOps” simply rolls off the tongue better than “OpDev,” but the argument could be made that since development comes first, operations will follow. But if we look under the hood, most shops actually do run “OpDev” pipelines, even though they do not recognize how that came about within the organization. ... Without a very strict CI/CD pipeline and (usually) many team members keeping infrastructure safe and cost efficient, operations is a Sisyphean task, and most importantly it’s slow. ... So we need a better way to handle infrastructure without turning the ops team into firefighters rather than cooperative team members. Correspondingly we want to enable the devs to build unencumbered by strict rule sets as well as preserve the agile nature and fast pace of development. ... More realistic and easily workable methods like Nitric abstract away the platform as a service SDKs from the codebase and replace the developers’ infra requirements with a library of tools that can be referenced exactly the same, no matter where the finalized code is deployed. The operations teams can easily maintain the needed infra patterns in a centralized location, reducing the need to solve issues after code PRs. 


5 dead-end IT skills — and how to avoid becoming obsolete

In software development today, automated testing is already well established and accelerating. But new opportunities in QA will appear focused on what to test and how, he says, along with the skills necessary to identify security risks and other issues with code that’s created by AI. Jobs for experienced software test engineers won’t disappear overnight, but understanding what AI brings to the equation and making use of it could be key to stay relevant this area. “In order to survive and extend their career — whatever the job role — humans should master the art of leveraging AI as an assistant and embrace it,” Palaniappan says. ... “With the growth of cloud-native and serverless databases, employers are now more interested in your understanding of database architecture and data governance in cloud environments,” Lloyd-Townshend says. “To keep moving in the right direction in your career, it’s important to develop adaptive problem-solving skills and not just rely solely on specific technical expertise.” Hafez agrees activities around database management will be a casualty of technological evolution, especially ones focused on “repetitive activities such as backups, maintenance, and optimization.”


The dangers of fashion-driven tech decisions

The fact that some companies are having success with generative AI, or Kubernetes, or whatever, doesn’t mean that you will. Our technology decisions should be driven by what we need, not necessarily by what we read. ... Google created Kubernetes to handle cluster orchestration at massive scale. It’s a microservices-based architecture, and its complexity is only worth it at scale. For many applications, it’s overkill because, let’s face it, most companies shouldn’t pretend to run their IT like Google. So why do so many keep using it even though it clearly is wrong for their needs? ... Andrej Karpathy, part of OpenAI’s founding team and previously director of AI at Tesla, notes that when you prompt an LLM with a question, “You’re not asking some magical AI. You’re asking a human data labeler,” one “whose average essence was lossily distilled into statistical token tumblers that are LLMs.” The machines are good at combing through lots of data to surface answers, but it’s perhaps just a more sophisticated spin on a search engine. ... That might be exactly what you need, but it also might not be. Rather than defaulting to “the answer is generative AI,” regardless of the question, we’d do well to better tune how and when we use generative AI.


The race is on to make AI agents do your online shopping for you

Just as AI chatbots have proven somewhat useful for surfacing information that’s hard to find through search engines, AI shopping agents have the potential to find products or deals that you might not otherwise have found on your own. In theory, these tools could save you hours when you need to book a cheap flight, or help you easily locate a good birthday present for your brother-in-law. ... If AI shopping agents really take off, it could mean fewer people going to online storefronts, where retailers have historically been able to upsell them or promote impulse purchases. It also means that advertisers may not get valuable information about shoppers, so they can be targeted with other products. For that reason, those very advertisers and retailers are unlikely to let AI agents disrupt their industries without a fight. That’s part of why companies like Rabbit and Anthropic are training AI agents to use the ordinary user interface of a website — that is, the bot would use the site just like you do, clicking and typing in a browser in a way that’s largely indistinguishable from a real person. That way, there’s no need to ask permission to use an online service through a back end — permission that could be rescinded if you’re hurting their business.


2025 will be a bad year for remote work

CEOs don’t trust their employees to work hard at home and fear they’re watching daytime TV in their pajamas while on the clock. They intuit office presence and the supervision of employees who appear to be working as a metric for productivity. They can feel personally more comfortable when they can walk around, interact with employees, and manage and supervise in person. Some CEOs also feel the need to justify their spending on office space, office equipment, and other costs associated with office work. Whatever the reasons, there’s a general disagreement between employees, who mostly want the option to work from home, and CEOs, who mostly want to require employees to come into the office. ... The remote work revolution will take a serious hit next year, both in government and business. Then, with new generations of workers and leaders gradually rising in the workforce in the coming decade, plus remote work-enabling technologies like AI (specifically agentic AI) and augmented reality growing in capability, remote work will make a slow, inevitable, and permanent comeback. In the meantime, 2025 will be a rough year for remote workers. Bu it also represents a huge opportunity for startups and even established companies to hire the very best employees who are turned away elsewhere because they insist on working remotely.


Japan’s Next Step With Open-Source Software: Global Strategy

Japanese open-source developers are renowned for their skill, dedication, and meticulous focus on quality and detail. Their contributions have shaped global projects and produced standout achievements, such as the Ruby programming language, which exemplifies Japan's influence in open-source development. However, corporate policies in Japan have often been cautious regarding open source, particularly concerning licensing, lack of resources for future development, security worries, and other perceived limitations. While large Japanese corporations contribute significantly to open-source projects, they lag behind their U.S. and European counterparts in leveraging open-source as a core component of their products and services. This is now beginning to change. Open source is increasingly recognized as a way to accelerate development and expand global reach. Japanese companies are looking to open-source as a tool for increasing the speed of development, not just as a way to get projects up and running. ... It's true that when developing something, you should spend time-solving your own unique problems, and there is a tendency to use tools that can be combined with other existing tools to solve problems that can be solved. 


7 Critical Education Trends That Will Define Learning In 2025

As machines become more efficient at analyzing trends, crunching numbers and generating reports, the value of the skills that they still can’t replicate will grow. This means that educators should increasingly focus on nurturing these soft, "human" skills, like critical thinking, big-picture strategy, communication, emotional intelligence, leadership and teamwork. Expect to see greater integration of these into mainstream education as we train to become more effective at high-value tasks involving person-to-person interactions and navigation of complex and chaotic real-world situations. ... All learners are different – we take in information at different speeds; while some of us absorb knowledge better from videos, some benefit more from group discussions or activity-based learning. Personalized learning promises to deliver education in a way that's tailored to the specific strengths of individual students. This means tailored lesson plans, assessments and learning materials. In 2025 we will see experiments and pilot projects involving using AI to accomplish this begin to move into the mainstream, as well as the emergence of AI tutoring aids that are able to track the progress of students in real time and adjust the delivery of learning on-the-fly to create dynamic and engaging learning environments.


How an Effective AppSec Program Shifts Your Teams From Fixing to Building

While tools and processes are critical, they only address the technical side of the challenge. Ensuring a cohesive culture of cooperation between development and security teams is just as important. There must be a solid partnership between both sides for efforts to succeed. Implementing a security mentorship program can be an effective way to deliver this collaboration. By appointing senior engineers as mentors, organizations can leverage existing expertise to guide developers through secure coding practices. These mentors provide real-time support, offering just-in-time advice when critical vulnerabilities arise. This not only helps resolve security issues faster but also ensures developers can remain focused on delivering high-performance code. Such mentorships are a great opportunity for individual engineers too, offering the chance to broaden their skills and further their careers.   ... Effective AppSec doesn’t have to come at the cost of speed and innovation. Fostering collaboration between development and security teams and integrating security seamlessly into workflows will make lives easier — while ensuring there is minimal impact to production schedules.


The Evolution of Time-Series Models: AI Leading a New Forecasting Era

The power of machine learning (ML) methods in time series forecasting first gained prominence during the M4 and M5 forecasting competitions, where ML-based models significantly outperformed traditional statistical methods for the first time. In the M5 competition (2020), advanced models like LightGBM, DeepAR, and N-BEATS demonstrated the effectiveness of incorporating exogenous variables—factors like weather or holidays that influence the data but aren’t part of the core time series. This approach led to unprecedented forecasting accuracy. These competitions highlighted the importance of cross-learning from multiple related series and paved the way for developing foundation models specifically designed for time series analysis. ... Looking ahead, combining time series models with language models is unlocking exciting innovations. Models like Chronos, Moirai, and TimesFM are pushing the boundaries of time series forecasting, but the next frontier is blending traditional sensor data with unstructured text for even better results. Take the automobile industry—combining sensor data with technician reports and service notes through NLP to get a complete view of potential maintenance issues. 


Treat AI like a human: Redefining cybersecurity

Treating AI like a human is a perspective shift that will fundamentally change how cybersecurity leaders operate. This shift encourages security teams to think of AI as a collaborative partner with human failings. For example, as AI becomes increasingly autonomous, organizations will need to focus on aligning its use with the business’ goals while maintaining reasonable control over its sovereignty. However, organizations will also need to consider in policy and control design AI’s potential to manipulate the truth and produce inadequate results, much like humans do. ... Effective human oversight should include policies and processes for mapping, managing, and measuring AI risk. It also should include accountability structures, so teams and individuals are empowered, responsible, and trained. Organizations should also establish the context to frame risks related to an AI system. AI actors in charge of one part of the process rarely have full visibility or control over other parts. ... Performance indicators include analyzing, assessing, benchmarking, and ultimately monitoring AI risk and related effects. Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI dependencies. 



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein

Daily Tech Digest - December 02, 2024

The end of AI scaling may not be nigh: Here’s what’s next

The concern is that scaling, which has driven advances for years, may not extend to the next generation of models. Reporting suggests that the development of frontier models like GPT-5, which push the current limits of AI, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI and Bloomberg covered similar news at Google and Anthropic. This issue has led to concerns that these systems may be subject to the law of diminishing returns — where each added unit of input yields progressively smaller gains. As LLMs grow larger, the costs of getting high-quality training data and scaling infrastructure increase exponentially, reducing the returns on performance improvement in new models. Compounding this challenge is the limited availability of high-quality new data, as much of the accessible information has already been incorporated into existing training datasets. ... While scaling challenges dominate much of the current discourse around LLMs, recent studies suggest that current models are already capable of extraordinary results, raising a provocative question of whether more scaling even matters.


How to talk to your board about tech debt

Instead of opening the conversation about “code quality,” start talking about business outcomes. Rather than discuss “legacy systems,” talk about “revenue bottlenecks,” and replace “technical debt” with “innovation capacity.” When you reframe the conversation this way, technical debt becomes a strategic business issue that directly impacts the value metrics the board cares about most. ... Focus on delivering immediate change in a self-funding way. Double down on automation through AI. Take out costs and use those funds to compress your transformation. ... Here’s where many CIOs stumble: presenting technical debt as a problem that needs to be eliminated. Instead, show how leading companies manage it strategically. Our research reveals that top performers allocate around 15% of their IT budget to debt remediation. This balances debt reduction and prioritizes future strategic innovations, which means committing to continuous updates, upgrades, and management of end-user software, hardware, and associated services. And it translates into an organization that’s stable and innovative. We also found throwing too much money at tech debt can be counterproductive. Our analysis found a distinct relationship between a company’s digital core maturity and technical debt remediation. 


Why You Need More Than A Chief Product Security Officer In The Age Of AI

Security by design means building digital systems and products that have security as their foundation. When building software, a security-by-design approach will involve a thorough risk analysis of the product, considering potential weaknesses that could be exploited by attackers. This is known as threat modeling, and it helps to expand on a desire for "secure" software to ask "security of what?" and "secure from whom?" With these considerations and recommendations, products are designed with the appropriate security controls for the given industry and regulatory environment. To do this well, two teams are needed—the developers and the security team. However, there’s a common misconception that these teams are trained with the same knowledge and skill set to work cohesively. ... As the AI landscape rapidly evolves, businesses must proactively adapt to emerging regulatory requirements; this transformation begins with a fundamental cultural shift. In an era where AI plays a pivotal role in driving innovation, threat modeling should no longer be an afterthought but a pillar of responsible AI leadership. While appointing a chief product security officer is a smart first step, adopting a security-by-design mindset starts by bringing together developer and security teams at the early software design phase.


Enterprise Architecture in 2025 and beyond

The democratisation of AI presents both a challenge and an opportunity for enterprise architects. While generative AI lowers the barrier to entry for coding and data analysis, it also complicates the governance landscape. Organisations must grapple with the reality that, when it comes to skills, anyone can now leverage AI to generate code or analyse data without the traditional oversight mechanisms that have historically been in place. ... The acceleration of technological innovation presents both opportunities and challenges for enterprise architects. With generative AI leading the charge, organisations are compelled to innovate faster than ever before. Yet, this rapid pace raises significant concerns around risk management and regulatory compliance. Enterprise architects must navigate this tension by implementing frameworks that allow for agile innovation while maintaining necessary safeguards. ... In the evolving landscape of EA, the concept of a digital twin of an organisation (DTO) is emerging as a transformative opportunity, and we see this being realised in 2025. ... Outside of 'what-ifs', AI could enable real-time decision-making within DTOs by continuously processing and analysing live data streams. This is particularly valuable for dynamic industries like retail or manufacturing, where market conditions, customer demands, or operational circumstances can shift rapidly.


Clearing the Clouds Around the Shared Responsibility Model

Enterprise leaders need to dig into the documentation for each cloud service they use to understand their organizational responsibilities and to avoid potential gaps and misunderstandings. While there is a definite division of responsibilities, CSPs typically position themselves as partners eager to help their customers uphold their part of cloud security. “The cloud service providers are very interested and invested in their customers understanding the model,” says Armknecht. ... Both parties, customer and provider, have their security responsibilities, but misunderstandings can still arise. In the early days of cloud, the incorrect assumption of automatic security was one of the most common misconceptions enterprise leaders had around cloud. Cloud providers secure the cloud, so any data plunked in the cloud was automatically safe, right? Wrong. ... Even if customers fully understand their responsibilities, they may make mistakes when trying to fulfill them. Misconfigurations are a potential outcome for customers navigating cloud security. It is also possible for misconfigurations to occur on the cloud provider side. “The CIA triad: confidentiality, integrity, and availability. Essentially a misconfiguration or a lack of configuration is going to put one of those things at risk,” says Armknecht. 


Data centers go nuclear for power-hungry AI workloads

AWS, Google, Meta, Microsoft, and Oracle are among the companies exploring nuclear energy. “Nuclear power is a carbon-free, reliable energy source that can complement variable renewable energy sources like wind and solar with firm generation. Advanced nuclear reactors are considered safer and more efficient than traditional nuclear reactors. They can also be built more quickly and in a more modular fashion,” said Amanda Peterson Corio, global head of data center energy at Google. ... “The NRC has, for the last few years, been reviewing both preliminary information and full applications for small modular reactors, including designs that cool the reactor fuel with inert gases, molten salts, or liquid metals. Our reviews have generic schedules of 2 to 3 years, depending on the license or permit being sought,” said Scott Burnell, public affairs officer at the NRC. ... Analysts agree that nuclear is an essential part of a carbon-free, AI-burdened electric grid. “The attraction of nuclear in a world where you’re trying to take the grid to carbon-free energy is that it is really the only proven reliable source of carbon-free energy, one that generates whenever I need it to generate, and I can guarantee that capacity is there, except for the refuel or the maintenance periods,” Uptime Institute’s Dietrich pointed out.


How Banking Leaders Can Enhance Risk and Compliance With AI

On one hand, AI can reduce risk exposure while making regulatory compliance more efficient. AI can also enhance fraud and cybersecurity detection. On the other hand, the complexity of AI models, coupled with concerns around data privacy and algorithmic transparency, requires careful oversight to avoid regulatory pitfalls and maintain customer or member trust. How the industry moves forward will largely depend on pending regulations and the leaps AI science may take, but for now, here is where the current state of affairs lies. ... While AI holds immense potential, its adoption hinges on maintaining account holder confidence. One of the most common concerns expressed by both financial institutions and their account holders is around transparency in AI decision-making. While 73% of financial institutions are convinced that AI can significantly enhance digital account holder experiences, apprehensions about AI’s impact on account holder trust are significant, with 54% expressing concerns over potential negative effects. The concern seems valid, as less than half of consumers feel comfortable with their financial data being processed by AI, even if it gives them a better digital banking experience.


When Prompt Injections Attack: Bing and AI Vulnerabilities

Tricking a chatbot into behaving badly (by “injecting” a cleverly malicious prompt into its input) turns out to be just the beginning. So what should you do when a chatbot tries tricking you back? And are there lessons we can learn — or even bigger issues ahead? ... While erroneous output is often called an AI “hallucination,” Edwards has been credited with popularizing the alternate term “confabulation.” It’s a term from psychology that describes the filling of memory gaps with imaginings. Willison complains that both terms are still derived from known-and-observed human behaviors. But then he acknowledges that it’s probably already too late to stop the trend of projecting humanlike characteristics onto AI. “That ship has sailed…” Is there also a hidden advantage there too? “It turns out, thinking of AIs like human beings is a really useful shortcut for all sorts of things about how you work with them…” “You tell people, ‘Look, it’s gullible.’ You tell people it makes things up, it can hallucinate all of those things. … I do think that the human analogies are effective shortcuts for helping people understand how to use these things and how they work.”


Refactoring AI code: The good, the bad, and the weird

Generative AI is no longer a novelty in the software development world: it’s being increasingly used as an assistant (and sometimes a free agent) to write code running in real-world production. But every developer knows that writing new code from scratch is only a small part of their daily work. Much of a developer’s time is spent maintaining an existing codebase and refactoring code written by other hands. ... “AI-based code typically is syntactically correct but often lacks the clarity or polish that comes from a human developer’s understanding of best practices,” he says. “Developers often need to clean up variable names, simplify logic, or restructure code for better readability.” ... According to Gajjar, “AI tools are known to overengineer solutions so that the code produced is bulkier than it really should be for simple tasks. There are often extraneous steps that developers have to trim off, or a simplified structure must be achieved for efficiency and maintainability.” Nag adds that AI can “throw in error handling and edge cases that aren’t always necessary. It’s like it’s trying to show off everything it knows, even when a simpler solution would suffice.”


How Businesses Can Speed Up AI Adoption

To ensure successful AI adoption, businesses should follow a structured approach that focuses on key strategic steps. First, they should build and curate their organisational data assets. A solid data foundation is crucial for effective AI initiatives, enabling companies to draw meaningful insights that drive accurate AI results and consumer interactions. Next, identifying applicable use cases tailored to specific business needs is essential. This may include generative, visual, or conversational AI applications, ensuring alignment with organisational goals. When investing in AI capabilities, choosing off-the-shelf solutions is advisable, unless there is a compelling business justification for custom development. This allows companies to quickly implement new technologies without accumulating technical debt. Finally, maintaining an active data feedback loop is vital for AI effectiveness. Regularly updating data ensures AI models produce accurate results and helps prevent issues associated with “stale” data, which can hinder performance and limit insights. ... As external pressures such as regulatory changes and shifting consumer expectations create a sense of urgency and complexity, it’s critical that organisations are proactive in overcoming internal obstacles.



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins