Showing posts with label DeepMind. Show all posts
Showing posts with label DeepMind. Show all posts

Daily Tech Digest - July 16, 2025


Quote for the day:

"Whatever the mind of man can conceive and believe, it can achieve." -- Napoleon Hill


The Seventh Wave: How AI Will Change the Technology Industry

AI presents three threats to the software industry: Cheap code: TuringBots, using generative AI to create software, threatens the low-code/no-code players. Cheap replacement: Software systems, be they CRM or ERP, are structured databases – repositories for client records or financial records. Generative AI, coupled with agentic AI, holds out the promise of a new way to manage this data, opening the door to an enterprising generation of tech companies that will offer AI CRM, AI financials, AI database, AI logistics, etc. ... Better functionality: AI-native systems will continually learn and flex and adapt without millions of dollars of consulting and customization. They hold the promise of being up to date and always ready to take on new business problems and challenges without rebuilds. When the business and process changes, the tech will learn and change. ... On one hand, the legacy software systems that PwC, Deloitte, and others have implemented for decades and that comprise much of their expertise will be challenged in the short term and shrink in the long term. Simultaneously, there will be a massive demand for expertise in AI. Cognizant, Capgemini, and others will be called on to help companies implement AI computing systems and migrate away from legacy vendors. Forrester believes that the tech services sector will grow by 3.6% in 2025.


Software Security Imperative: Forging a Unified Standard of Care

The debate surrounding liability in the open source ecosystem requires careful consideration. Imposing direct liability on individual open source maintainers could stifle the very innovation that drives the industry forward. It risks dismantling the vast ecosystem that countless developers rely upon. ... The software bill of materials (SBOM) is rapidly transitioning from a nascent concept to an undeniable business necessity. As regulatory pressures intensify, driven by a growing awareness of software supply chain risks, a robust SBOM strategy is becoming critical for organizational survival in the tech landscape. But the value of SBOMs extends far beyond a single software development project. While often considered for open source software, an SBOM provides visibility across the entire software ecosystem. It illuminates components from third-party commercial software, helps manage data across merged projects and validates code from external contributors or subcontractors — any code integrated into a larger system. ... The path to a secure digital future requires commitment from all stakeholders. Technology companies must adopt comprehensive security practices, regulators must craft thoughtful policies that encourage innovation while holding organizations accountable and the broader ecosystem must support the collaborative development of practical and effective standards.


The 4 Types of Project Managers

The prophet type is all about taking risks and pushing boundaries. They don’t play by the rules; they make their own. And they’re not just thinking outside the box, they’re throwing the box away altogether. It’s like a rebel without a cause, except this rebel has a cause – growth. These visionaries thrive in ambiguity and uncertainty, seeing potential where others see only chaos or impossibility. They often face resistance from more conservative team members who prefer predictable outcomes and established processes. ... The gambler type is all about taking chances and making big bets. They’re not afraid to roll the dice and see what happens. And while they play by the rules of the game, they don’t have a good business case to back up their bets. It’s like convincing your boss to let you play video games all day because you just have a hunch it will improve your productivity. But don’t worry, the gambler type isn’t just blindly throwing money around. They seek to engage other members of the organization who are also up for a little risk-taking. ... The expert type is all about challenging the existing strategy by pursuing growth opportunities that lie outside the current strategy, but are backed up by solid quantitative evidence. They’re like the detectives of the business world, following the clues and gathering the evidence to make their case. And while the growth opportunities are well-supported and should be feasible, the challenge is getting other organizational members to listen to their advice.


OpenAI, Google DeepMind and Anthropic sound alarm: ‘We may be losing the ability to understand AI’

The unusual cooperation comes as AI systems develop new abilities to “think out loud” in human language before answering questions. This creates an opportunity to peek inside their decision-making processes and catch harmful intentions before they turn into actions. But the researchers warn this transparency is fragile and could vanish as AI technology advances. ... “AI systems that ‘think’ in human language offer a unique opportunity for AI safety: we can monitor their chains of thought for the intent to misbehave,” the researchers explain. But they emphasize that this monitoring capability “may be fragile” and could disappear through various technological developments. ... When AI models misbehave — exploiting training flaws, manipulating data, or falling victim to attacks — they often confess in their reasoning traces. The researchers found examples where models wrote phrases like “Let’s hack,” “Let’s sabotage,” or “I’m transferring money because the website instructed me to” in their internal thoughts. Jakub Pachocki, OpenAI’s chief technology officer and co-author of the paper, described the importance of this capability in a social media post. “I am extremely excited about the potential of chain-of-thought faithfulness & interpretability. It has significantly influenced the design of our reasoning models, starting with o1-preview,” he wrote.


Unmasking AsyncRAT: Navigating the labyrinth of forks

We believe that the groundwork for AsyncRAT was laid earlier by the Quasar RAT, which has been available on GitHub since 2015 and features a similar approach. Both are written in C#; however, their codebases differ fundamentally, suggesting that AsyncRAT was not just a mere fork of Quasar, but a complete rewrite. A fork, in this context, is a personal copy of someone else’s repository that one can freely modify without affecting the original project. The main link that ties them together lies in the custom cryptography classes used to decrypt the malware configuration settings. ... Ever since it was released to the public, AsyncRAT has spawned a multitude of new forks that have built upon its foundation. ... It’s also worth noting that DcRat’s plugin base builds upon AsyncRAT and further extends its functionality. Among the added plugins are capabilities such as webcam access, microphone recording, Discord token theft, and “fun stuff”, a collection of plugins used for joke purposes like opening and closing the CD tray, blocking keyboard and mouse input, moving the mouse, turning off the monitor, etc. Notably, DcRat also introduces a simple ransomware plugin that uses the AES-256 cipher to encrypt files, with the decryption key distributed only once the plugin has been requested.


Repatriating AI workloads? A hefty data center retrofit awaits

CIOs with in-house AI ambitions need to consider compute and networking, in addition to power and cooling, Thompson says. “As artificial intelligence moves from the lab to production, many organizations are discovering that their legacy data centers simply aren’t built to support the intensity of modern AI workloads,” he says. “Upgrading these facilities requires far more than installing a few GPUs.” Rack density is a major consideration, Thompson adds. Traditional data centers were designed around racks consuming 5 to 10 kilowatts, but AI workloads, particularly model training, push this to 50 to 100 kilowatts per rack. “Legacy facilities often lack the electrical backbone, cooling systems, and structural readiness to accommodate this jump,” he says. “As a result, many CIOs are facing a fork in the road: retrofit, rebuild, or rent.” Cooling is also an important piece of the puzzle because not only does it enable AI, but upgrades there can help pay for other upgrades, Thompson says. “By replacing inefficient air-based systems with modern liquid-cooled infrastructure, operators can reduce parasitic energy loads and improve power usage effectiveness,” he says. “This frees up electrical capacity for productive compute use — effectively allowing more business value to be generated per watt. For facilities nearing capacity, this can delay or eliminate the need for expensive utility upgrades or even new construction.”


Burnout, budgets and breaches – how can CISOs keep up?

As ever, collaboration in a crisis is critical. Security teams working closely with backup, resilience and recovery functions are better able to absorb shocks. When the business is confident in its ability to restore operations, security professionals face less pressure and uncertainty. This is also true for communication, especially post-breach. Organisations need to be transparent about how they’re containing the incident and what’s being done to prevent recurrence. ... There is also an element of the blame game going on, with everyone keen to avoid responsibility for an inevitable cyber breach. It’s much easier to point fingers at the IT team than to look at the wider implications or causes of a cyber-attack. Even something as simple as a phishing email can cause widespread problems and is something that individual employees must be aware of. ... To build and retain a capable cybersecurity team amid the widening skills gap, CISOs must lead a shift in both mindset and strategy. By embedding resilience into the core of cyber strategy, CISOs can reduce the relentless pressure to be perfect and create a healthier, more sustainable working environment. But resilience isn’t built in isolation. To truly address burnout and retention, CISOs need C-suite support and cultural change. Cybersecurity must be treated as a shared business-critical priority, not just an IT function. 


We Spend a Lot of Time Thinking Through the Worst - The Christ Hospital Health Network CISO

“We’ve spent a lot of time meeting with our business partners and talking through, ‘Hey, how would this specific part of the organization be able to run if this scenario happened?’” On top of internal preparations, Kobren shares that his team monitors incidents across the industry to draw lessons from real-world events. Given the unique threat landscape, he states, “We do spend a lot of time thinking through those scenarios because we know it’s one of the most attacked industries.” Moving forward, Kobren says that healthcare consistently ranks at the top when it comes to industries frequently targeted by cyberattacks. He elaborates that attackers have recognized the high impact of disrupting hospital services, making ransom demands more effective because organizations are desperate to restore operations. ... To strengthen identity security, Kobren follows a strong, centralized approach to access control. He mentions that the organization aims to manage “all access to all systems,” including remote and cloud-based applications. By integrating services with single sign-on (SSO), the team ensures control over user credentials: “We know that we are in control of your username and password.” This allows them to enforce password complexity, reset credentials when needed, and block accounts if security is compromised. Ultimately, Kobren states, “We want to be in control of as much of that process as possible” when it comes to identity management.


AI requires mature choices from companies

According to Felipe Chies of AWS, elasticity is the key to a successful AI infrastructure. “If you look at how organizations set up their systems, you see that the computing time when using an LLM can vary greatly. This is because the model has to break down the task and reason logically before it can provide an answer. It’s almost impossible to predict this computing time in advance,” says Chies. This requires an infrastructure that can handle this unpredictability: one that is quickly scalable, flexible, and doesn’t involve long waits for new hardware. Nowadays, you can’t afford to wait months for new GPUs, says Chies. The reverse is also important: being able to scale back. ... Ruud Zwakenberg of Red Hat also emphasizes that flexibility is essential in a world that is constantly changing. “We cannot predict the future,” he says. “What we do know for sure is that the world will be completely different in ten years. At the same time, nothing fundamental will change; it’s a paradox we’ve been seeing for a hundred years.” For Zwakenberg, it’s therefore all about keeping options open and being able to anticipate and respond to unexpected developments. According to Zwakenberg, this requires an infrastructural basis that is not rigid, but offers room for curiosity and innovation. You shouldn’t be afraid of surprises. Embrace surprises, Zwakenberg explains. 


Prompt-Based DevOps and the Reimagined Terminal

New AI-driven CLI tools prove there's demand for something more intelligent in the command line, but most are limited — they're single-purpose apps tied to individual model providers instead of full environments. They are geared towards code generation, not infrastructure and production work. They hint at what's possible, but don't deliver the deeper integration AI-assisted development needs. That's not a flaw, it's an opportunity to rethink the terminal entirely. The terminal's core strengths — its imperative input and time-based log of actions — make it the perfect place to run not just commands, but launch agents. By evolving the terminal to accept natural language input, be more system-aware, and provide interactive feedback, we can boost productivity without sacrificing the control engineers rely on. ... With prompt-driven workflows, they don't have to switch between dashboards or copy-paste scripts from wikis because they simply describe what they want done, and an agent takes care of the rest. And because this is taking place in the terminal, the agent can use any CLI to gather and analyze information from across data sources. The result? Faster execution, more consistent results, and fewer mistakes. That doesn't mean engineers are sidelined. Instead, they're overseeing more projects at once. Their role shifts from doing every step to supervising workflows — monitoring agents, reviewing outputs, and stepping in when human judgment is needed.

Daily Tech Digest - April 19, 2025


Quote for the day:

"Good things come to people who wait, but better things come to those who go out and get them." -- Anonymous



AI Agents Are Coming to Work: Are Organizations Equipped?

The promise of agentic AI is already evident in organizations adopting it. Fiserv, the global fintech powerhouse, developed an agentic AI application that autonomously assigns merchant codes to businesses, reducing human intervention to under 1%. Sharbel Shaaya, director of AI operations and intelligent automation at Fiserv, said, "Tomorrow's agentic systems will handle this groundwork natively, amplifying their value." In the automotive world, Ford Motor Company is using agentic AI to amplify car design. Bryan Goodman, director of AI at Ford Motor Company, said, "Traditionally, Ford's designers sculpt physical clay models, a time-consuming process followed by lengthy engineering simulations. One computational fluid dynamics run used to take 15 hours, which AI model predicts the outcome in 10 seconds." ... In regulated industries, compliance adds complexity. Ramnik Bajaj, chief data and analytics officer at United Services Automobile Association, sees agentic AI interpreting requests in insurance but insists on human oversight for tasks such as claims adjudication. "Regulatory constraints demand a human in the loop," Bajaj said. Trust is another hurdle - 61% of organizations cite concerns about errors, bias and data quality. "Scaling AI requires robust governance. Without trust, pilots stay pilots," Sarker said.


Code, cloud, and culture: The tech blueprint transforming Indian workplaces

The shift to hybrid cloud infrastructure is enabling Indian enterprises to modernise their legacy systems while scaling with agility. According to a report by EY India, 90% of Indian businesses believe that cloud transformation is accelerating their AI initiatives. Hybrid cloud environments—which blend on-premise infrastructure with public and private cloud—are becoming the default architecture for industries like banking, insurance, and manufacturing. HDFC Bank, for example, has adopted a hybrid cloud model to offer hyper-personalised customer services and real-time transaction capabilities. This digital core is helping financial institutions respond faster to market changes while maintaining strict regulatory compliance. ... No technological transformation is complete without human capability. The demand for AI-skilled professionals in India has grown 14x between 2016 and 2023, and the country is expected to need over one million AI professionals by 2026. Companies are responding with aggressive reskilling strategies. ... The strategic convergence of AI, SaaS, cloud, and human capital is rewriting the rules of productivity, innovation, and global competitiveness. With forward-looking investments, grassroots upskilling efforts, and a vibrant startup culture, India is poised to define the future of work, not just for itself, but for the world.


Bridging the Gap Between Legacy Infrastructure and AI-Optimized Data Centers

Failure to modernize legacy infrastructure isn’t just a technical hurdle; it’s a strategic risk. Outdated systems increase operational costs, limit scalability, and create inefficiencies that hinder innovation. However, fully replacing existing infrastructure is rarely a practical or cost-effective solution. The path forward lies in a phased approach – modernizing legacy systems incrementally while introducing AI-optimized environments capable of meeting future demands. ... AI’s relentless demand for compute power requires a more diversified and resilient approach to energy sourcing. While Small modular reactors (SMRs) present a promising future solution for scalable, reliable, and low-carbon power generation, they are not yet equipped to serve critical loads in the near term. Consequently, many operators are prioritizing behind-the-meter (BTM) generation, primarily gas-focused solutions, with the potential to implement combined cycle technologies that capture and repurpose steam for additional energy efficiency. ... The future of AI-optimized data centers lies in adaptation, not replacement. Substituting legacy infrastructure on a large scale is prohibitively expensive and disruptive. Instead, a hybrid approach – layering AI-optimized environments alongside existing systems while incrementally retrofitting older infrastructure – provides a more pragmatic path forward.


Why a Culture of Observability Is Key to Technology Success

A successful observability strategy requires fostering a culture of shared responsibility for observability across all teams. By embedding observability throughout the software development life cycle, organizations create a proactive environment where issues are detected and resolved early. This will require observability buy-in across all teams within the organization. ... Teams that prioritize observability gain deeper insights into system performance and user experiences, resulting in faster incident resolution and improved service delivery. Promoting an organizational mindset that values transparency and continuous monitoring is key. ... Shifting observability left into the development process helps teams catch issues earlier, reducing the cost of fixing bugs and enhancing product quality. Developers can integrate observability into code from the outset, ensuring systems are instrumented and monitored at every stage. This is a key step toward the establishment of a culture of observability. ... A big part is making sure that all the stakeholders across the organization, whether high or low in the org chart, understand what’s going on. This means taking feedback. Leadership needs to be involved. This means communicating what you are doing, why you are doing it and what the implications are of doing or not doing it.


Why Agile Software Development is the Future of Engineering

Implementing iterative processes can significantly decrease time-to-market for projects. Statistics show that organizations using adaptive methodologies can increase their release frequency by up to 25%. This approach enables teams to respond promptly to market changes and customer feedback, leading to improved alignment with user expectations. Collaboration among cross-functional teams enhances productivity. In environments that prioritize teamwork, 85% of participants report higher engagement levels, which directly correlates with output quality. Structured daily check-ins allow for quick problem resolution, keeping projects on track and minimizing delays. Frequent iteration facilitates continuous testing and integration, which reduces errors early in the process. According to industry data, teams that deploy in short cycles experience up to 50% fewer defects compared to traditional methodologies. This not only expedites delivery but also enhances the overall reliability of the product. The focus on customer involvement significantly impacts product relevance. Engaging clients throughout the development process can lead to a 70% increase in user satisfaction, as adjustments are made in real time. Clients appreciate seeing their feedback implemented quickly, fostering a sense of ownership over the final product.


Why Risk Management Is Key to Sustainable Business Growth

Recent bank collapses demonstrate that a lack of effective risk management strategies can cause serious consequences for financial institutions, their customers, and the economy. A comprehensive risk management strategy is a tool to help banks protect assets, customers, and larger economic problems. ... Risk management is heavily driven by data analytics and identifying patterns in historical data. Predictive models and machine learning can forecast financial losses and detect risks and customer fraud. Additionally, banks can use predictive analytics for proactive decision-making. Data accuracy is of the utmost importance in this case because analysts use that information to make decisions about investments, customer loans, and more. Some banks rely on artificial intelligence (AI) to help detect customer defaults in more dynamic ways. For example, AI could be used in training cross-domain data to better understand customer behavior, or it could be used to make real-time decisions by incorporating real-time changes in the market data. It also improves the customer experience by offering answers through highly trained chatbots, thereby increasing customer satisfaction and reducing reputation risk. Enterprises are training generative AI (GenAI) to be virtual regulatory and policy experts to answer questions about regulations, company policies, and guidelines. 


How U.S. tariffs could impact cloud computing

Major cloud providers are commonly referred to as hyperscalers, and include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. They initially may absorb rising cloud costs to avoid risking market share by passing them on to customers. However, tariffs on hardware components such as servers and networking equipment will likely force them to reconsider their financial models, which means enterprises can expect eventual, if not immediate, price increases. ... As hyperscalers adapt to increased costs by exploring nearshoring or regional manufacturing, these shifts may permanently change cloud pricing dynamics. Enterprises that rely on public cloud services may need to plan for contract renegotiations and higher costs in the coming years, particularly as hardware supply chains remain volatile. The financial strain imposed by tariffs also has a ripple effect, indirectly affecting cloud adoption rates. ... Adaptability and agility remain essential for both providers and enterprises. For cloud vendors, resilience in the supply chain and efficiency in hardware will be critical. Meanwhile, enterprise leaders must balance cost containment with their broader strategic goals for digital growth. By implementing thoughtful planning and proactive strategies, organizations can navigate these challenges and continue to derive value from the cloud in the years ahead.


CIOs must mind their own data confidence gap

A lack of good data can lead to several problems, says Aidora’s Agarwal. C-level executives — even CIOs — may demand that new products be built when the data isn’t ready, leading to IT leaders who look incompetent because they repeatedly push back on timelines, or to those who pass burden down to their employees. “The teams may get pushed on to build the next set of things that they may not be ready to build,” he says. “This can result in failed initiatives, significantly delayed delivery, or burned-out teams.” To fix this data quality confidence gap, companies should focus on being more transparent across their org charts, Palaniappan advises. Lower-level IT leaders can help CIOs and the C-suite understand their organization’s data readiness needs by creating detailed roadmaps for IT initiatives, including a timeline to fix data problems, he says. “Take a ‘crawl, walk, run’ approach to drive this in the right direction, and put out a roadmap,” he says. “Look at your data maturity in order to execute your roadmap, and then slowly improve upon it.” Companies need strong data foundations, including data strategies focused on business cases, data accessibility, and data security, adds Softserve’s Myronov. Organizations should also employ skeptics to point out potential data problems during AI and other data-driven projects, he suggests.


AI has grown beyond human knowledge, says Google's DeepMind unit

Not only is human judgment an impediment, but the short, clipped nature of prompt interactions never allows the AI model to advance beyond question and answer. "In the era of human data, language-based AI has largely focused on short interaction episodes: e.g., a user asks a question and the agent responds," the researchers write. "The agent aims exclusively for outcomes within the current episode, such as directly answering a user's question." There's no memory, there's no continuity between snippets of interaction in prompting. "Typically, little or no information carries over from one episode to the next, precluding any adaptation over time," write Silver and Sutton. However, in their proposed Age of Experience, "Agents will inhabit streams of experience, rather than short snippets of interaction." Silver and Sutton draw an analogy between streams and humans learning over a lifetime of accumulated experience, and how they act based on long-range goals, not just the immediate task. ... The researchers suggest that the arrival of "thinking" or "reasoning" AI models, such as Gemini, DeepSeek's R1, and OpenAI's o1, may be surpassed by experience agents. The problem with reasoning agents is that they "imitate" human language when they produce verbose output about steps to an answer, and human thought can be limited by its embedded assumptions.


Understanding API Security: Insights from GoDaddy’s FTC Settlement

The FTC’s action against GoDaddy stemmed from the company’s inadequate security practices, which led to multiple data breaches from 2019 to 2022. These breaches exposed sensitive customer data, including usernames, passwords, and employee credentials. ... GoDaddy did not implement multi-factor authentication (MFA) and encryption, leaving customer data vulnerable. Without MFA and robust checks against credential stuffing, attackers could easily exploit stolen or weak credentials to access user accounts. Even with authentication, attackers can abuse authenticated sessions if the underlying API authorization is flawed. ... The absence of rate-limiting, logging, and anomaly detection allowed unauthorized access to 1.2 million customer records. More critically, this lack of deep inspection meant an inability to baseline normal API behavior and detect subtle reconnaissance or the exploitation of unique business logic flaws – attacks that often bypass traditional signature-based tools. ... Inadequate Access Controls: The exposure of admin credentials and encryption keys enabled attackers to compromise websites. Strong access controls are essential to restrict access to sensitive information to authorized personnel only. This highlights the risk not just of credential theft, but of authorization flaws within APIs themselves, where authenticated users gain access to data they shouldn’t.