Daily Tech Digest - December 04, 2024

Will AI help doctors decide whether you live or die?

One of the things GPT-4 “was terrible at” compared to human doctors is causally linked diagnoses, Rodman said. “There was a case where you had to recognize that a patient had dermatomyositis, an autoimmune condition responding to cancer, because of colon cancer. The physicians mostly recognized that the patient had colon cancer, and it was causing dermatomyositis. GPT got really stuck,” he said. IDC’s Shegewi points out that if AI models are not tuned rigorously and with “proper guardrails” or safety mechanisms, the technology can provide “plausible but incorrect information, leading to misinformation. “Clinicians may also become de-skilled as over-reliance on the outputs of AI diminishes critical thinking,” Shegewi said. “Large-scale deployments will likely raise issues concerning patient data privacy and regulatory compliance. The risk for bias, inherent in any AI model, is also huge and might harm underrepresented populations.” Additionally, AI’s increasing use by healthcare insurance companies doesn’t typically translate into what’s best for a patient. Doctors who face an onslaught of AI-generated patient care denials from insurance companies are fighting back — and they’re using the same technology to automate their appeals.


The Rise Of ‘Quiet Hiring’: 5 Ways To Use Trend For A Career Advantage

Adaptability is key in quiet hiring. When I interviewed Ross Thornley, Co-founder of AQai, an organization that provides adaptability training, he said, "We’re entering a period of volatility where expanding adaptability skills is essential." Whether it’s learning to manage budgets, mastering new software, or brushing up on leadership skills, the more versatile you are, the more indispensable you become. ... You might feel uncomfortable tooting your own horn, but staying silent about your successes can hurt you in the long run. Keep track of your achievements as you take on extra responsibilities. Highlight the skills you’re building and the results you’re delivering. Then, share them in conversations with your manager or during performance reviews. By showcasing your value, you ensure your work doesn’t go unnoticed. ... When holding onto status-quo ways, employees limit themselves from reaching heights that might improve engagement. Without exploration, there’s a greater potential to be misaligned with a job or responsibility that isn’t motivating. Every new role—whether formal or not—is an opportunity to grow and explore. Use this time to test out roles you might not have considered. See if you enjoy the work or if it’s a stepping stone to something even better.


Creating a unified data, AI and infrastructure strategy to scale innovation ambitions

To effectively leverage data and AI, organisations must first shift their mindset from merely collecting data to actively connecting the dots. This involves identifying the core problem that needs to be addressed and focusing on use cases that will yield maximum business impact, rather than isolating data collection and AI model development. ... To enhance AI implementation, organisations should shift from a use-case-driven approach to a capability-driven strategy, focusing on building reusable AI capabilities such as conversational AI and voice analytics for both internal and external service desks. A company exploring numerous use cases can then group them into distinct capabilities for greater efficiency. Establishing a centralised team dedicated to data, AI and infrastructure is essential to create a robust foundation and platform while allowing business units to develop their own AI-powered applications on top, ensuring consistency across the organisation. ... To succeed in scaling innovation and AI, organisations must move from merely collecting data to actively connecting data, AI and infrastructure. Today’s advancements in cloud and data management technologies enable this integration, fostering collaboration and driving innovation at scale.


AWS introduces S3 Tables, a new bucket type for data analytics

The new bucket type is S3 Table, for storing data in Apache Iceberg format. Iceberg is an open table format (OTF) used for storing data for analytics, and with richer features than Parquet alone. Parquet is the format used by Hadoop and by many data processing frameworks. Parquet and Iceberg are already widely used on S3, so why a new bucket type? Warfield said the popularity of Parquet in S3 was the rationale for S3 Tables. "We actually serve about 15 million requests per second to Parquet tables," he told us, but there is a maintenance burden. Internally, he said, "the structure of them is a lot like git, a ledger of changes, and the mutations get added as snapshots. Even with a relatively low rate of updates into your OTF you can quickly end up with hundreds of thousands of objects under your table." The consequence is poor performance. "In the OTF world it was anticipated that this would happen, but it was left to the customer to do the table maintenance tasks," Warfield said. The Iceberg project includes code to expire snapshots and clean up metadata, but it is still necessary "to go and schedule and run those Spark jobs." Apache Spark is a SQL engine for large scale data. Parquet on S3 was "a storage system on top of a storage system," said Warfield, making it sub-optimal.


Innovation Is Fun, but Infrastructure Pays the Bills

Innovation and platform infrastructure are intertwined — each move affects the other. Yet, many companies are stumbling because they’re too focused on innovation. They’re churning out apps, features, and updates at breakneck speed, all while standing on a wobbly foundation. It’s a classic case of putting the cart before the horse, and it affects the intended impact of some really great ideas. A strong platform infrastructure is your ticket to scalability and flexibility. It lets you pivot quickly to meet new market demands, integrate cutting-edge technologies, and expand your services without tearing everything down and starting from scratch. Plus, it trims the fat off your development and deployment times, letting you bring innovative ideas to market faster. Sidestepping platform infrastructure is a recipe for disaster. It can make your application sluggish, prone to crashes, and a sitting duck for cyberattacks. This isn’t just a headache for users — it’s a surefire way to tarnish your product’s reputation and negatively affect its success. Think of it like building a mansion on a shaky foundation; it doesn’t matter how grand it looks if it’s doomed to collapse.


Open-washing and the illusion of AI openness

Open-washing in AI refers to companies overstating their commitment to openness while keeping critical components proprietary. This approach isn’t new. We’ve seen cloud-washing, AI-washing, and now open-washing, all called out here. Marketing firms want the concept of being “open” to put them in a virtuous category of companies that save baby seals from oil spills. I don’t knock them, but let’s not get too far over our skis, billion-dollar tech companies. ... At the heart of open-washing is a distortion of the principles of openness, transparency, and reusability. Transparency in AI would entail publicly documenting how models are developed, trained, fine-tuned, and deployed. This would include full access to the data sets, weights, architectures, and decision-making processes involved in the models’ construction. Most AI companies fall short of this level of transparency. By selectively releasing parts of their models—often stripped of key details—they craft an illusion of openness. Reusability, another pillar of openness, is much the same. Companies allow access to their models via APIs or lightweight downloadable versions but prevent meaningful adaptation by tying usage to proprietary ecosystems. 


Microsoft hit with more litigation accusing it of predatory pricing

“All UK businesses and organizations that bought licenses for Windows Server via Amazon’s AWS, Google Cloud Platform, and Alibaba Cloud may have been overcharged and will be represented in this new ‘opt-out’ collective action,” the law firm statement said. The accusations make sense when viewed from a compliance/regulatory perspective. Although companies are allowed to give volume discounts and to offer other pricing differences for different customers, compliance issues kick in when the company controls an especially high percentage of the market. ... “Put simply, Microsoft is punishing UK businesses and organizations for using Google, Amazon, and Alibaba for cloud computing by forcing them to pay more money for Windows Server. By doing so, Microsoft is trying to force customers into using its cloud computing service, Azure, and restricting competition in the sector,” Stasi said. “This lawsuit aims to challenge Microsoft’s anti-competitive behavior, push them to reveal exactly how much businesses in the UK have been illegally penalized, and return the money to organizations that have been unfairly overcharged.”


Balancing tradition and innovation in the digital age

It’s easy to get carried away by the hype of cutting-edge technology. For me, it’s about making sure that you always ask yourself if you’re solving an actual business problem. That has to be front of mind, as opposed to being solution- or tech-first. You also have to ask yourself if the business problem requires nascent or proven tech? Once you figure that out, the tech side answer is relatively straightforward. So, even with leveraging emerging tech, you need to think congruently about your business model. ... Security is the first thing I looked at. Even in my interview, I said it would be the first thing I looked at, and it has been. Security and privacy are the basic foundations of trust, and customer and community trust is what our business is built on. So, my approach is to spend money to bring in deep expertise, which I have, and empower them to go deep into our current state and be honest about any gaps we might have. And to think about where we implement both tactical and strategic ways to bridge those gaps. It’s also important to be clear about the risk we hold and how long we want to hold it for and focus on building a response plan. So, if and when an incident occurs, we can recover and respond gracefully and have solid comms plans and playbooks in place. 


Threat intelligence and why it matters for cybersecurity

Cyber threat intelligence – who needs it? The short answer is everyone. Cyber threat intelligence is for anyone with a vested interest in the cybersecurity infrastructure of an organization. Although CTI can be tailored to suit any audience, in most cases, threat intelligence teams work closely with the Security Operation Centre (SOC) that monitors and protects a business on a daily basis. Research shows that CTI has proved beneficial to people at all levels of government (national, regional or local), from security officers, police chiefs and policymakers, to information technology specialists and law enforcement officers. It also provides value to many other professionals, such as IT managers, accountants and criminal analysts. ... The creation of cyber threat intelligence is a circular process known as an “intelligence cycle”. In this cycle, which consists of five stages, data collection is planned, implemented and evaluated; the results are then analysed to produce intelligence, which is later disseminated and re-evaluated against new information and consumer feedback. The circularity of the process means that gaps are identified in the intelligence delivered, initiating new collection requirements and launching the intelligence cycle all over again.


Securing AI’s new frontier: Visibility, governance, and mitigating compliance risks

Securing and governing the use of data for AI/ML model training is perhaps the most challenging and pressing issue in AI security. Using confidential or protected information during the training or fine-tuning process comes with the risk that data could be recoverable through model extraction techniques or using common adversarial techniques (i.e., prompt injection, jailbreak). Following data security and least-privilege access best practices is essential for protecting data during development, but bespoke AI runtime threat detection is response is required to avoid exfiltration of data via model responses. ... Securing AI applications in production is equally important as securing the underlying infrastructure and is a key component of maintaining a secure data and AI lifecycle. This requires real-time monitoring of both prompts and responses to identify, notify, and block security and safety threats. A robust AI security solution prevents adversarial attacks like prompt injection, masks sensitive data to prevent exfiltration via a model response, and also addresses safety concerns such as bias, fairness, and harmful content. 



Quote for the day:

"Leading people is like cooking. Don_t stir too much; It annoys the ingredients_and spoils the food" -- Rick Julian

Daily Tech Digest - December 03, 2024

Why DevOps Is Backward and How We Can Solve It

Perhaps the term “DevOps” simply rolls off the tongue better than “OpDev,” but the argument could be made that since development comes first, operations will follow. But if we look under the hood, most shops actually do run “OpDev” pipelines, even though they do not recognize how that came about within the organization. ... Without a very strict CI/CD pipeline and (usually) many team members keeping infrastructure safe and cost efficient, operations is a Sisyphean task, and most importantly it’s slow. ... So we need a better way to handle infrastructure without turning the ops team into firefighters rather than cooperative team members. Correspondingly we want to enable the devs to build unencumbered by strict rule sets as well as preserve the agile nature and fast pace of development. ... More realistic and easily workable methods like Nitric abstract away the platform as a service SDKs from the codebase and replace the developers’ infra requirements with a library of tools that can be referenced exactly the same, no matter where the finalized code is deployed. The operations teams can easily maintain the needed infra patterns in a centralized location, reducing the need to solve issues after code PRs. 


5 dead-end IT skills — and how to avoid becoming obsolete

In software development today, automated testing is already well established and accelerating. But new opportunities in QA will appear focused on what to test and how, he says, along with the skills necessary to identify security risks and other issues with code that’s created by AI. Jobs for experienced software test engineers won’t disappear overnight, but understanding what AI brings to the equation and making use of it could be key to stay relevant this area. “In order to survive and extend their career — whatever the job role — humans should master the art of leveraging AI as an assistant and embrace it,” Palaniappan says. ... “With the growth of cloud-native and serverless databases, employers are now more interested in your understanding of database architecture and data governance in cloud environments,” Lloyd-Townshend says. “To keep moving in the right direction in your career, it’s important to develop adaptive problem-solving skills and not just rely solely on specific technical expertise.” Hafez agrees activities around database management will be a casualty of technological evolution, especially ones focused on “repetitive activities such as backups, maintenance, and optimization.”


The dangers of fashion-driven tech decisions

The fact that some companies are having success with generative AI, or Kubernetes, or whatever, doesn’t mean that you will. Our technology decisions should be driven by what we need, not necessarily by what we read. ... Google created Kubernetes to handle cluster orchestration at massive scale. It’s a microservices-based architecture, and its complexity is only worth it at scale. For many applications, it’s overkill because, let’s face it, most companies shouldn’t pretend to run their IT like Google. So why do so many keep using it even though it clearly is wrong for their needs? ... Andrej Karpathy, part of OpenAI’s founding team and previously director of AI at Tesla, notes that when you prompt an LLM with a question, “You’re not asking some magical AI. You’re asking a human data labeler,” one “whose average essence was lossily distilled into statistical token tumblers that are LLMs.” The machines are good at combing through lots of data to surface answers, but it’s perhaps just a more sophisticated spin on a search engine. ... That might be exactly what you need, but it also might not be. Rather than defaulting to “the answer is generative AI,” regardless of the question, we’d do well to better tune how and when we use generative AI.


The race is on to make AI agents do your online shopping for you

Just as AI chatbots have proven somewhat useful for surfacing information that’s hard to find through search engines, AI shopping agents have the potential to find products or deals that you might not otherwise have found on your own. In theory, these tools could save you hours when you need to book a cheap flight, or help you easily locate a good birthday present for your brother-in-law. ... If AI shopping agents really take off, it could mean fewer people going to online storefronts, where retailers have historically been able to upsell them or promote impulse purchases. It also means that advertisers may not get valuable information about shoppers, so they can be targeted with other products. For that reason, those very advertisers and retailers are unlikely to let AI agents disrupt their industries without a fight. That’s part of why companies like Rabbit and Anthropic are training AI agents to use the ordinary user interface of a website — that is, the bot would use the site just like you do, clicking and typing in a browser in a way that’s largely indistinguishable from a real person. That way, there’s no need to ask permission to use an online service through a back end — permission that could be rescinded if you’re hurting their business.


2025 will be a bad year for remote work

CEOs don’t trust their employees to work hard at home and fear they’re watching daytime TV in their pajamas while on the clock. They intuit office presence and the supervision of employees who appear to be working as a metric for productivity. They can feel personally more comfortable when they can walk around, interact with employees, and manage and supervise in person. Some CEOs also feel the need to justify their spending on office space, office equipment, and other costs associated with office work. Whatever the reasons, there’s a general disagreement between employees, who mostly want the option to work from home, and CEOs, who mostly want to require employees to come into the office. ... The remote work revolution will take a serious hit next year, both in government and business. Then, with new generations of workers and leaders gradually rising in the workforce in the coming decade, plus remote work-enabling technologies like AI (specifically agentic AI) and augmented reality growing in capability, remote work will make a slow, inevitable, and permanent comeback. In the meantime, 2025 will be a rough year for remote workers. Bu it also represents a huge opportunity for startups and even established companies to hire the very best employees who are turned away elsewhere because they insist on working remotely.


Japan’s Next Step With Open-Source Software: Global Strategy

Japanese open-source developers are renowned for their skill, dedication, and meticulous focus on quality and detail. Their contributions have shaped global projects and produced standout achievements, such as the Ruby programming language, which exemplifies Japan's influence in open-source development. However, corporate policies in Japan have often been cautious regarding open source, particularly concerning licensing, lack of resources for future development, security worries, and other perceived limitations. While large Japanese corporations contribute significantly to open-source projects, they lag behind their U.S. and European counterparts in leveraging open-source as a core component of their products and services. This is now beginning to change. Open source is increasingly recognized as a way to accelerate development and expand global reach. Japanese companies are looking to open-source as a tool for increasing the speed of development, not just as a way to get projects up and running. ... It's true that when developing something, you should spend time-solving your own unique problems, and there is a tendency to use tools that can be combined with other existing tools to solve problems that can be solved. 


7 Critical Education Trends That Will Define Learning In 2025

As machines become more efficient at analyzing trends, crunching numbers and generating reports, the value of the skills that they still can’t replicate will grow. This means that educators should increasingly focus on nurturing these soft, "human" skills, like critical thinking, big-picture strategy, communication, emotional intelligence, leadership and teamwork. Expect to see greater integration of these into mainstream education as we train to become more effective at high-value tasks involving person-to-person interactions and navigation of complex and chaotic real-world situations. ... All learners are different – we take in information at different speeds; while some of us absorb knowledge better from videos, some benefit more from group discussions or activity-based learning. Personalized learning promises to deliver education in a way that's tailored to the specific strengths of individual students. This means tailored lesson plans, assessments and learning materials. In 2025 we will see experiments and pilot projects involving using AI to accomplish this begin to move into the mainstream, as well as the emergence of AI tutoring aids that are able to track the progress of students in real time and adjust the delivery of learning on-the-fly to create dynamic and engaging learning environments.


How an Effective AppSec Program Shifts Your Teams From Fixing to Building

While tools and processes are critical, they only address the technical side of the challenge. Ensuring a cohesive culture of cooperation between development and security teams is just as important. There must be a solid partnership between both sides for efforts to succeed. Implementing a security mentorship program can be an effective way to deliver this collaboration. By appointing senior engineers as mentors, organizations can leverage existing expertise to guide developers through secure coding practices. These mentors provide real-time support, offering just-in-time advice when critical vulnerabilities arise. This not only helps resolve security issues faster but also ensures developers can remain focused on delivering high-performance code. Such mentorships are a great opportunity for individual engineers too, offering the chance to broaden their skills and further their careers.   ... Effective AppSec doesn’t have to come at the cost of speed and innovation. Fostering collaboration between development and security teams and integrating security seamlessly into workflows will make lives easier — while ensuring there is minimal impact to production schedules.


The Evolution of Time-Series Models: AI Leading a New Forecasting Era

The power of machine learning (ML) methods in time series forecasting first gained prominence during the M4 and M5 forecasting competitions, where ML-based models significantly outperformed traditional statistical methods for the first time. In the M5 competition (2020), advanced models like LightGBM, DeepAR, and N-BEATS demonstrated the effectiveness of incorporating exogenous variables—factors like weather or holidays that influence the data but aren’t part of the core time series. This approach led to unprecedented forecasting accuracy. These competitions highlighted the importance of cross-learning from multiple related series and paved the way for developing foundation models specifically designed for time series analysis. ... Looking ahead, combining time series models with language models is unlocking exciting innovations. Models like Chronos, Moirai, and TimesFM are pushing the boundaries of time series forecasting, but the next frontier is blending traditional sensor data with unstructured text for even better results. Take the automobile industry—combining sensor data with technician reports and service notes through NLP to get a complete view of potential maintenance issues. 


Treat AI like a human: Redefining cybersecurity

Treating AI like a human is a perspective shift that will fundamentally change how cybersecurity leaders operate. This shift encourages security teams to think of AI as a collaborative partner with human failings. For example, as AI becomes increasingly autonomous, organizations will need to focus on aligning its use with the business’ goals while maintaining reasonable control over its sovereignty. However, organizations will also need to consider in policy and control design AI’s potential to manipulate the truth and produce inadequate results, much like humans do. ... Effective human oversight should include policies and processes for mapping, managing, and measuring AI risk. It also should include accountability structures, so teams and individuals are empowered, responsible, and trained. Organizations should also establish the context to frame risks related to an AI system. AI actors in charge of one part of the process rarely have full visibility or control over other parts. ... Performance indicators include analyzing, assessing, benchmarking, and ultimately monitoring AI risk and related effects. Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI dependencies. 



Quote for the day:

"The distance between insanity and genius is measured only by success." -- Bruce Feirstein

Daily Tech Digest - December 02, 2024

The end of AI scaling may not be nigh: Here’s what’s next

The concern is that scaling, which has driven advances for years, may not extend to the next generation of models. Reporting suggests that the development of frontier models like GPT-5, which push the current limits of AI, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI and Bloomberg covered similar news at Google and Anthropic. This issue has led to concerns that these systems may be subject to the law of diminishing returns — where each added unit of input yields progressively smaller gains. As LLMs grow larger, the costs of getting high-quality training data and scaling infrastructure increase exponentially, reducing the returns on performance improvement in new models. Compounding this challenge is the limited availability of high-quality new data, as much of the accessible information has already been incorporated into existing training datasets. ... While scaling challenges dominate much of the current discourse around LLMs, recent studies suggest that current models are already capable of extraordinary results, raising a provocative question of whether more scaling even matters.


How to talk to your board about tech debt

Instead of opening the conversation about “code quality,” start talking about business outcomes. Rather than discuss “legacy systems,” talk about “revenue bottlenecks,” and replace “technical debt” with “innovation capacity.” When you reframe the conversation this way, technical debt becomes a strategic business issue that directly impacts the value metrics the board cares about most. ... Focus on delivering immediate change in a self-funding way. Double down on automation through AI. Take out costs and use those funds to compress your transformation. ... Here’s where many CIOs stumble: presenting technical debt as a problem that needs to be eliminated. Instead, show how leading companies manage it strategically. Our research reveals that top performers allocate around 15% of their IT budget to debt remediation. This balances debt reduction and prioritizes future strategic innovations, which means committing to continuous updates, upgrades, and management of end-user software, hardware, and associated services. And it translates into an organization that’s stable and innovative. We also found throwing too much money at tech debt can be counterproductive. Our analysis found a distinct relationship between a company’s digital core maturity and technical debt remediation. 


Why You Need More Than A Chief Product Security Officer In The Age Of AI

Security by design means building digital systems and products that have security as their foundation. When building software, a security-by-design approach will involve a thorough risk analysis of the product, considering potential weaknesses that could be exploited by attackers. This is known as threat modeling, and it helps to expand on a desire for "secure" software to ask "security of what?" and "secure from whom?" With these considerations and recommendations, products are designed with the appropriate security controls for the given industry and regulatory environment. To do this well, two teams are needed—the developers and the security team. However, there’s a common misconception that these teams are trained with the same knowledge and skill set to work cohesively. ... As the AI landscape rapidly evolves, businesses must proactively adapt to emerging regulatory requirements; this transformation begins with a fundamental cultural shift. In an era where AI plays a pivotal role in driving innovation, threat modeling should no longer be an afterthought but a pillar of responsible AI leadership. While appointing a chief product security officer is a smart first step, adopting a security-by-design mindset starts by bringing together developer and security teams at the early software design phase.


Enterprise Architecture in 2025 and beyond

The democratisation of AI presents both a challenge and an opportunity for enterprise architects. While generative AI lowers the barrier to entry for coding and data analysis, it also complicates the governance landscape. Organisations must grapple with the reality that, when it comes to skills, anyone can now leverage AI to generate code or analyse data without the traditional oversight mechanisms that have historically been in place. ... The acceleration of technological innovation presents both opportunities and challenges for enterprise architects. With generative AI leading the charge, organisations are compelled to innovate faster than ever before. Yet, this rapid pace raises significant concerns around risk management and regulatory compliance. Enterprise architects must navigate this tension by implementing frameworks that allow for agile innovation while maintaining necessary safeguards. ... In the evolving landscape of EA, the concept of a digital twin of an organisation (DTO) is emerging as a transformative opportunity, and we see this being realised in 2025. ... Outside of 'what-ifs', AI could enable real-time decision-making within DTOs by continuously processing and analysing live data streams. This is particularly valuable for dynamic industries like retail or manufacturing, where market conditions, customer demands, or operational circumstances can shift rapidly.


Clearing the Clouds Around the Shared Responsibility Model

Enterprise leaders need to dig into the documentation for each cloud service they use to understand their organizational responsibilities and to avoid potential gaps and misunderstandings. While there is a definite division of responsibilities, CSPs typically position themselves as partners eager to help their customers uphold their part of cloud security. “The cloud service providers are very interested and invested in their customers understanding the model,” says Armknecht. ... Both parties, customer and provider, have their security responsibilities, but misunderstandings can still arise. In the early days of cloud, the incorrect assumption of automatic security was one of the most common misconceptions enterprise leaders had around cloud. Cloud providers secure the cloud, so any data plunked in the cloud was automatically safe, right? Wrong. ... Even if customers fully understand their responsibilities, they may make mistakes when trying to fulfill them. Misconfigurations are a potential outcome for customers navigating cloud security. It is also possible for misconfigurations to occur on the cloud provider side. “The CIA triad: confidentiality, integrity, and availability. Essentially a misconfiguration or a lack of configuration is going to put one of those things at risk,” says Armknecht. 


Data centers go nuclear for power-hungry AI workloads

AWS, Google, Meta, Microsoft, and Oracle are among the companies exploring nuclear energy. “Nuclear power is a carbon-free, reliable energy source that can complement variable renewable energy sources like wind and solar with firm generation. Advanced nuclear reactors are considered safer and more efficient than traditional nuclear reactors. They can also be built more quickly and in a more modular fashion,” said Amanda Peterson Corio, global head of data center energy at Google. ... “The NRC has, for the last few years, been reviewing both preliminary information and full applications for small modular reactors, including designs that cool the reactor fuel with inert gases, molten salts, or liquid metals. Our reviews have generic schedules of 2 to 3 years, depending on the license or permit being sought,” said Scott Burnell, public affairs officer at the NRC. ... Analysts agree that nuclear is an essential part of a carbon-free, AI-burdened electric grid. “The attraction of nuclear in a world where you’re trying to take the grid to carbon-free energy is that it is really the only proven reliable source of carbon-free energy, one that generates whenever I need it to generate, and I can guarantee that capacity is there, except for the refuel or the maintenance periods,” Uptime Institute’s Dietrich pointed out.


How Banking Leaders Can Enhance Risk and Compliance With AI

On one hand, AI can reduce risk exposure while making regulatory compliance more efficient. AI can also enhance fraud and cybersecurity detection. On the other hand, the complexity of AI models, coupled with concerns around data privacy and algorithmic transparency, requires careful oversight to avoid regulatory pitfalls and maintain customer or member trust. How the industry moves forward will largely depend on pending regulations and the leaps AI science may take, but for now, here is where the current state of affairs lies. ... While AI holds immense potential, its adoption hinges on maintaining account holder confidence. One of the most common concerns expressed by both financial institutions and their account holders is around transparency in AI decision-making. While 73% of financial institutions are convinced that AI can significantly enhance digital account holder experiences, apprehensions about AI’s impact on account holder trust are significant, with 54% expressing concerns over potential negative effects. The concern seems valid, as less than half of consumers feel comfortable with their financial data being processed by AI, even if it gives them a better digital banking experience.


When Prompt Injections Attack: Bing and AI Vulnerabilities

Tricking a chatbot into behaving badly (by “injecting” a cleverly malicious prompt into its input) turns out to be just the beginning. So what should you do when a chatbot tries tricking you back? And are there lessons we can learn — or even bigger issues ahead? ... While erroneous output is often called an AI “hallucination,” Edwards has been credited with popularizing the alternate term “confabulation.” It’s a term from psychology that describes the filling of memory gaps with imaginings. Willison complains that both terms are still derived from known-and-observed human behaviors. But then he acknowledges that it’s probably already too late to stop the trend of projecting humanlike characteristics onto AI. “That ship has sailed…” Is there also a hidden advantage there too? “It turns out, thinking of AIs like human beings is a really useful shortcut for all sorts of things about how you work with them…” “You tell people, ‘Look, it’s gullible.’ You tell people it makes things up, it can hallucinate all of those things. … I do think that the human analogies are effective shortcuts for helping people understand how to use these things and how they work.”


Refactoring AI code: The good, the bad, and the weird

Generative AI is no longer a novelty in the software development world: it’s being increasingly used as an assistant (and sometimes a free agent) to write code running in real-world production. But every developer knows that writing new code from scratch is only a small part of their daily work. Much of a developer’s time is spent maintaining an existing codebase and refactoring code written by other hands. ... “AI-based code typically is syntactically correct but often lacks the clarity or polish that comes from a human developer’s understanding of best practices,” he says. “Developers often need to clean up variable names, simplify logic, or restructure code for better readability.” ... According to Gajjar, “AI tools are known to overengineer solutions so that the code produced is bulkier than it really should be for simple tasks. There are often extraneous steps that developers have to trim off, or a simplified structure must be achieved for efficiency and maintainability.” Nag adds that AI can “throw in error handling and edge cases that aren’t always necessary. It’s like it’s trying to show off everything it knows, even when a simpler solution would suffice.”


How Businesses Can Speed Up AI Adoption

To ensure successful AI adoption, businesses should follow a structured approach that focuses on key strategic steps. First, they should build and curate their organisational data assets. A solid data foundation is crucial for effective AI initiatives, enabling companies to draw meaningful insights that drive accurate AI results and consumer interactions. Next, identifying applicable use cases tailored to specific business needs is essential. This may include generative, visual, or conversational AI applications, ensuring alignment with organisational goals. When investing in AI capabilities, choosing off-the-shelf solutions is advisable, unless there is a compelling business justification for custom development. This allows companies to quickly implement new technologies without accumulating technical debt. Finally, maintaining an active data feedback loop is vital for AI effectiveness. Regularly updating data ensures AI models produce accurate results and helps prevent issues associated with “stale” data, which can hinder performance and limit insights. ... As external pressures such as regulatory changes and shifting consumer expectations create a sense of urgency and complexity, it’s critical that organisations are proactive in overcoming internal obstacles.



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins

Daily Tech Digest - December 01, 2024

Why microservices might be finished as monoliths return with a vengeance

Migrating to a microservice architecture has been known to cause complex interactions between services, circular calls, data integrity issues and, to be honest, it is almost impossible to get rid of the monolith completely. Let’s discuss why some of these issues occur once migrated to the microservices architecture. ... When moving to a microservices architecture, each client needs to be updated to work with the new service APIs. However, because clients are so tied to the monolith’s business logic, this requires refactoring their logic during the migration. Untangling these dependencies without breaking existing functionality takes time. Some client updates are often delayed due to the work’s complexity, leaving some clients still using the monolith database after migration. To avoid this, engineers may create new data models in a new service but keep existing models in the monolith. When models are deeply linked, this leads to data and functions split between services, causing multiple inter-service calls and data integrity issues. ... Data migration is one of the most complex and risky elements of moving to microservices. It is essential to accurately and completely transfer all relevant data to the new microservices. 


InputSnatch – A Side-Channel Attack Allow Attackers Steal The Input Data From LLM Models

Researchers found that both prefix caching and semantic caching, which are used by many major LLM providers, can leak information about what users type in without them meaning to. Attackers can potentially reconstruct private user queries with alarming accuracy by measuring the response time. The lead researcher said, “Our work shows the security holes that come with improving performance. This shows how important it is to put privacy and security first along with improving LLM inference.” “We propose a novel timing-based side-channel attack to execute input theft in LLMs inference. The cache-based attack faces the challenge of constructing candidate inputs in a large search space to hit and steal cached user queries. To address these challenges, we propose two primary components.” “The input constructor uses machine learning and LLM-based methods to learn how words are related to each other, and it also has optimized search mechanisms for generalized input construction.” ... The research team emphasizes the need for LLM service providers and developers to reassess their caching strategies. They suggest implementing robust privacy-preserving techniques to mitigate the risks associated with timing-based side-channel attacks.


Ransomware Gangs Seek Pen Testers to Boost Quality

As cybercriminal groups grow, specialization is a necessity. In fact, as cybercriminal gangs grow, their business structures increasingly resemble a corporation, with full-time staff, software development groups, and finance teams. By creating more structure around roles, cybercriminals can boost economies of scale and increase profits. ... some groups required specialization in roles based on geographical need — one of the earliest forms of contract work for cybercriminals is for those who can physically move cash, a way to break the paper trail. "Of course, there's recruitment for roles across the entire attack life cycle," Maor says. "When you're talking about financial fraud, mule recruitment ... has always been a key part of the business, and of course, development of the software, of malware, and end of services." Cybercriminals' concerns over software security boil down to self-preservation. In the first half of 2024, law enforcement agencies in the US, Australia, and the UK — among other nations — arrested prominent members of several groups, including the ALPHV/BlackCat ransomware group and seized control of BreachForums. The FBI was able to offer a decryption tool for victims of the BlackCat group — another reason why ransomware groups want to shore up their security.


Forget All-Cloud or All-On-Prem: Embrace Hybrid for Agility and Cost Savings

Hybrid isn’t just about cutting costs — it boosts speed, security, and performance. Agile applications run faster in the cloud, where teams can quickly spin up, test, and launch without the limits of on-prem systems. This agility becomes especially valuable when delivering software quickly to meet market demands without compromising the core stability of the entire system. Security and compliance are also critical drivers of hybrid adoption. Regulatory mandates often require data to remain on-premises to ensure compliance with local data residency laws. Hybrid infrastructure allows companies to move customer-facing applications to the cloud while keeping sensitive data on-prem. This separation of data from the front-end layers has become common in sectors like finance and government, where compliance demands and data security are non-negotiable. I have been speaking regularly to the CTOs of two very large banks in the US. They currently manage 15-20% of their workloads in the cloud and estimate the most they will ever have in the cloud would be 40-50%. They tell me the rest will stay on-prem — always — so they will always need to manage a hybrid environment.


Minimizing Attack Surface in the Cloud Environment

The increased dependence and popularity of the cloud environment expands the attack surface. These are the potential entry points, including network devices, applications, and services that attackers can exploit to infiltrate the cloud and access systems and sensitive data. ... Cloud services rely upon APIs for seamless integration with third-party applications or services. As the number of APIs increases, they expand the attack surface for attackers to exploit. Hackers can easily target insecure or poorly designed APIs that lack encryption or robust authentication mechanisms and access data resources, leading to data leaks and account takeover. ... The device or application not approved or supported by the IT team is called shadow IT. Since many of these devices and apps do not undergo the same security controls as the corporate ones, they become more vulnerable to hacking, putting the data stored within them at risk of manipulation. ... Unaddressed security gaps or errors threaten the cloud assets and data. Attackers can exploit misconfiguration and vulnerabilities in the cloud-hosted services, resulting in data breaches and other cyber attacks.


AI & structured cabling: Are they such unusual bedfellows?

The key word here is “structured” (its synonyms include organized, precise and efficient). When “structured” precedes the word “cabling,” it immediately points to a standardized way to design and install a cabling system that will be compliant to international standards, whilst providing a flexible and future-ready approach capable of supporting multiple generations of AI hardware. Typically, an AI data center’s structured cabling will be used to connect pieces of IT hardware together using high-performance, ultra-low loss optical fiber and Cat6A copper. ... What do we know about AI? Network speeds are constantly changing, and it feels like it’s happening on a daily basis. 400G and 800G are a reality today, with 1.6T coming soon. Just a few years ago, who would have believed that it was possible? Structured cabling offers the type of scalability and flexibility needed to accommodate these speed changes and the future growth of AI networks. ... Data centers are the “factory floor” of AI operations, and as AI continues to impact all areas of our lives, it will become increasingly integrated into emerging technologies like 5G, IoT, and Edge computing. This trend will only further emphasize the need for robust and scalable high-speed cabling systems.


Business Automation: Merging Technology and Skills

As technology progresses, business owners are eager for solutions that can handle repetitive tasks, freeing up time for their teams to focus on more strategic activities. One of the most effective strategies to achieve this is through business automation—a combination of technology and human skills that streamlines processes and boosts productivity. Business automation is designed to complement rather than replace human efforts. It helps teams reduce repetitive tasks, allowing them to concentrate on what matters most, such as improving customer satisfaction and driving innovation. By implementing automation, companies can increase productivity as routine jobs—like data entry and scheduling—are managed by automated systems. This shift not only saves time but also minimises errors associated with manual processes. Automation also enables better resource allocation. The insights gained from automated tools empower teams to make informed decisions and direct resources where they are needed most. Furthermore, real-time reporting offers valuable data that supports timely decision-making. Effective team management is crucial for any business, and automation can enhance productivity and accountability. 


Scaffolding for the South Africa National AI Policy Framework

The lack of specific responsibility assignment and cross-sectoral coordination mechanisms undermines the framework’s utility in guiding downstream activity. It is not too early to start articulating appropriate institutional arrangements, or encouraging debates between different models. A proposed multi-stakeholder platform to guide implementation lacks details about representation, participation criteria, and decision-making processes. This institutional uncertainty is further complicated by strained budgets and unclear funding mechanisms for new structures. Next, the framework’s lack of integration with existing policy landscapes is inadequate. There is a value in horizontal policy coherence across trade, competition, and other sectors. Reference to South Africa’s developmental policy course as articulated in the various Medium-Term Strategic Frameworks and in the National Development Plan 2030 would be helpful. There is a focus on transformation, development, and capacity-building, strengthening the intentions set out in the 2019 White Paper on Science, Technology and Innovation, which emphasizes ICT's role in further developmental goals within a socio-economic context that features high unemployment rates.


The DevSecOps Mindset: What It Is and Why You Need It

Navigating the delicate balance between speed and security is challenging for all organizations. That’s why so many are converting to the DevSecOps mindset. That said, it is not all smooth rolling when approaching the transition. Below are a few common factors that stand in the way of the security-first approach:Cultural Resistance: Teams may resist integrating security into fast-moving DevOps pipelines due to the extra initiative that individuals must take. Lack of Security Expertise: Many developers lack the deep security knowledge required to identify vulnerabilities early on due to the fast pace of technological innovations and creative threat actors. Limited Resources for Automation: Smaller organizations may struggle with the cost of automation tools. While DevSecOps incorporation might face a few hurdles, building a culture with regular security and automation brings many advantages that outweigh them. To name a few:Reduced Security Risks: By addressing security from the beginning, vulnerabilities get identified and resolved before they reach production. Organizations using DevSecOps practices experience a 50% reduction in security vulnerabilities compared to those that follow traditional development processes.


Talent in the new normal: How to manage fast-changing tech roles

The new workplace is one where automation and AI will be front and center. This has caught the imagination of today’s CIOs looking to move faster and scale. There’s no part of the business that can’t be automated. But how can the CIO build the culture, skills, and mindset to align with this new era of work, while also fostering growth? It will require CIOs to think differently. What might have worked five years ago will not cut it today. A good culture is key to an organization running effectively. This is why many of the biggest tech companies invest so heavily in making their offices a nice place to be. Culture is one of the intangible factors that make or break a professional’s happiness – and, by extension, their ability to work well. The CIO’s role in managing the organization’s growth is critical. CIOs understand how teams operate and, as a result, are well-placed to support their organization’s hiring and onboarding processes. Here, it’s not just about finding talent with the right skills, but also ensuring they meet the cultural needs of the organization. At a time when skills shortages are still a major challenge, what digital leaders should be looking for are candidates with an open mind and a desire to learn and grow. 



Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman

Daily Tech Digest - November 30, 2024

API Mocking Is Essential to Effective Change Management

A constant baseline is essential when managing API updates. Without it, teams risk diverging from the API’s intended design, resulting in more drift and potentially disruptive breaking changes. API mocks serve as a baseline by accurately simulating the API’s intended behavior and data formats. This enables development and quality assurance teams to compare proposed changes to a standardized benchmark, ensuring that new features or upgrades adhere to the API’s specified architecture before deployment. ... A centralized mocking environment is helpful for teams who have to manage changes over time and monitor API versions. Teams create a transparent, trusted source of truth from a centralized environment where all stakeholders may access the mock API, which forms the basis of version control and change tracking. By making every team operate from the same baseline in keeping with the desired API behavior and structure, this centralized approach helps reduce drift. ... Teams that want to properly use API mocking in change management must include mocking techniques in their daily development processes. These techniques ensure that the API’s documented specifications, implementation and testing environments remain in line, lowering the risk of drift and supporting consistent, open updates.


How Open-Source BI Tools Are Transforming DevOps Pipelines

BI tools automate the tracking of all DevOps processes so one can easily visualize, analyze, and interpret the key metrics. Rather than manually monitoring the metrics, such as the percentage ratio of successfully deployed applications or the time taken to deploy an application, one is now able to simply rely on BI to spot such trends in the first place. This gives one the ability to operationalize insights which saves time and ensures that pipelines are well managed. ... If you are looking for an easy-to-use tool, Metabase is the best option available. It allows you to build dashboards and query databases without the need to write elaborate codes. It also allows the user to retrieve data from a variety of systems, which, from a business perspective, allows a user to measure KPIs, for example, deployment frequency or the occurrence of system-related problems. ... If you have big resources that need monitoring, Superset is perfect. Superset was designed with the concept of big data loads in mind, offering advanced visualization and projection technology for different data storage devices. Businesses with medium-complexity operational structures optimize the usage of Superset thanks to its state-of-the-art data manipulation abilities. 


Inside threats: How can companies improve their cyber hygiene?

Reflecting on the disconnect between IT and end users, Dyer says that there will “always be a disparity between the two classes of employees”. “IT is a core fundamental dependency to allow end users to perform their roles to the best of their ability – delivered as a service for which they consume as customers,” he says. “Users wish to achieve and excel in their employment, and restrictions of IT can be a negative detractor in doing so. He adds that users are seldom consciously trying to compromise the security of an organisation, and that the incompetence in security hygiene is due to a lack of investment, awareness, engagement or reinforcement. “It is the job of IT leaders to bridge that gap [and] partner with their respective peers to build a positive security awareness culture where employees feel empowered to speak up if something doesn’t look right and to believe in the mission of effectively securing the organisation from the evolving world of outside and inside threats.” And to build that culture, Dyer has some advice, such as making policies clearly defined and user-friendly, allowing employees to do their jobs using tech to the best of their ability (with an understanding of the guardrails they have) and instructing them on what to do should something suspicious happen.


Navigating Responsible AI in the FinTech Landscape

Cross-functional collaboration is critical to successful, responsible AI implementation. This requires the engagement of multiple departments, including security, compliance, legal, and AI governance teams, to collectively reassess and reinforce risk management strategies within the AI landscape. Bringing together these diverse teams allows for a more comprehensive understanding of risks and safeguards across departments, contributing to a well-rounded approach to AI governance. A practical way to ensure effective oversight and foster this collaboration is by establishing an AI review board composed of representatives from each key function. This board would serve as a centralized body for overseeing AI policy adherence, compliance, and ethical considerations, ensuring that all aspects of AI risk are addressed cohesively and transparently. Organizations should also focus on creating realistic and streamlined processes for responsible AI use, balancing regulatory requirements with operational feasibility. While it may be tempting to establish one consistent process, for instance, where conformity assessments would be generated for every AI system, this would lead to a significant delay in time to value. Instead, companies should carefully evaluate the value vs. effort of the systems, including any regulatory documentation, before proceeding toward production.


The Future Of IT Leadership: Lessons From INTERPOL

Cyber threats never keep still. The same can be said of the challenges IT leaders face. Historically, IT functions were reactive—fixing problems as they arose. Today, that approach is no longer sufficient. IT leaders must anticipate challenges before they materialise. This proactive stance involves harnessing the power of data, artificial intelligence (AI), and predictive analytics. It is by analysing trends and identifying vulnerabilities that IT leaders can prevent disruptions and position their organisations to respond effectively to emerging risks. This shift from reactive to predictive leadership is essential for navigating the complexities of digital transformation. ... Cybercrime doesn’t respect boundaries, and neither should IT leadership. Successful cybersecurity efforts often rely on partnerships—between businesses, governments, and international organisations. INTERPOL’s Africa Cyber Surge operations demonstrate the power of collaboration in tackling threats at scale. An IT leader needs to adopt a similar mindset by building networks of trust across industries, government agencies, and even with and through competitors. It can help create shared defences against common threats. Besides, collaboration isn’t limited to external partnerships. 


4 prerequisites for IT leaders to navigate today’s era of disruption

IT leaders aren’t just tech wizards, but savvy data merchants. Imagine yourself as a store owner, but instead of shelves stocked with physical goods, your inventory consists of valuable data, insights, and AI/ML products. To succeed, they need to make their data products appealing by understanding customer needs, ensuring products are current, of a high-quality, and organized. Offering value-added services on top of data, like analysis and consulting, can further enhance the appeal. By adopting this mindset and applying business principles, IT leaders can unlock new revenue streams. ... With AI becoming more pervasive, the ethical and responsible use of it is paramount. Leaders must ensure that data governance policies are in place to mitigate risks of bias or discrimination, especially when AI models are trained on biased datasets. Transparency is key in AI, as it builds trust and empowers stakeholders to understand and challenge AI-generated insights. By building a program on the existing foundation of culture, structure, and governance, IT leaders can navigate the complexities of AI while upholding ethical standards and fostering innovation. ... IT leaders need to maintain a balance of intellectual (IQ) and emotional (EQ) intelligence to manage an AI-infused workplace. 


How to Build a Strong and Resilient IT Bench

Since talent is likely to be short in new technology areas and in older tech areas that must still be supported, CIOs should consider a two-pronged approach that develops bench strength talent for new technologies while also ensuring that older infrastructure technologies have talent waiting in the wings. ... Companies that partner with universities and community colleges in their local areas have found a natural synergy with these institutions, which want to ensure that what they teach is relevant to the workplace. This synergy consists of companies offering input for computer science and IT courses and also providing guest lecturers for classes. Those companies bring “real world” IT problems into student labs and offer internships for course credit that enable students to work in company IT departments with an IT staff mentor. ... It’s great to send people to seminars and certification programs, but unless they immediately apply what they learned to an IT project, they’ll soon forget it. Mindful of this, we immediately placed newly trained staff on actual IT projects so they could apply what they learned. Sometimes a more experienced staff member had to mentor them, but it was worth it. Confidence and competence built quickly.


The Growing Quantum Threat to Enterprise Data: What Next?

One of the most significant implications of quantum computing for cybersecurity is its potential to break widely used encryption algorithms. Many of the encryption systems that safeguard sensitive enterprise data today rely on the computational difficulty of certain mathematical problems, such as factoring large numbers or solving discrete logarithms. Classical computers would take an impractical amount of time to crack these encryption schemes, but quantum computers could theoretically solve these problems in a matter of seconds, rendering many of today's security protocols obsolete. ... Recognizing the urgent need to address the quantum threat, the National Institute of Standards and Technology launched a multi-phase effort to develop post-quantum cryptographic standards. After eight years of rigorous research and relentless effort, NIST released the first set of finalized post-quantum encryption standards on Aug. 13. These standards aim to provide a clear and practical framework for organizations seeking to transition to quantum-safe cryptography. The final selection included algorithms for both public-key encryption and digital signatures, two of the most critical components of modern cybersecurity systems.


Are we worse at cloud computing than 10 years ago?

Rapid advancements in cloud technologies combined with mounting pressures for digital transformation have led organizations to hastily adopt cloud solutions without establishing the necessary foundations for success. This is especially common if companies migrate to infrastructure as a service without adequate modernization, which can increase costs and technical debt. ... The growing pressure to adopt AI and generative AI technologies further complicates the situation and adds another layer of complexity. Organizations are caught between the need to move quickly and the requirement for careful, strategic implementation. ... Include thorough application assessment, dependency mapping, and detailed modeling of the total cost of ownership before migration begins. Success metrics must be clearly defined from the outset. ... When it comes to modernization, organizations must consider the appropriate refactoring and cloud-native development based on business value rather than novelty. The overarching goal is to approach cloud adoption as a strategic transformation. We must stop looking at this as a migration from one type of technology to another. Cloud computing and AI will work best when business objectives drive technology decisions rather than the other way around.
A well-structured Data Operating Model integrates data efforts within business units, ensuring alignment with actual business needs. I’ve seen how a "Hub and Spoke" model, which places central governance at the core while embedding data professionals in individual business units, can break down silos. This alignment ensures that data solutions are built to drive specific business outcomes rather than operating in isolation. ... Data leaders must ruthlessly prioritize initiatives that deliver tangible business outcomes. It’s easy to get caught up in hype cycles—whether it’s the latest AI model or a cutting-edge data governance framework—but real success lies in identifying the use cases that have a direct line of sight to revenue or cost savings. ... A common mistake I’ve seen in organizations is focusing too much on static reports or dashboards. The real value comes when data becomes actionable — when it’s integrated into decision-making processes and products. ... Being "data-driven" has become a dangerous buzzword. Overemphasizing data can lead to analysis paralysis. The true measure of success is not how much data you have or how many dashboards you create but the value you deliver to the business. 



Quote for the day:

"Efficiency is doing the thing right. Effectiveness is doing the right thing." -- Peter F. Drucker