Showing posts with label chatGPT. Show all posts
Showing posts with label chatGPT. Show all posts

Daily Tech Digest - September 20, 2025


Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer


Five forces shaping the next wave of quantum innovation

Quantum computers are expected to solve problems currently intractable for even the world’s fastest supercomputers. Their core strengths — efficiently finding hidden patterns in complex datasets and navigating vast optimization challenges — will enable the design of novel drugs and materials, the creation of superior financial algorithms and open new frontiers in cryptography and cybersecurity. ... The quantum ecosystem now largely agrees that simply scaling up today’s computers, which suffer from significant noise and errors that prevent fault-tolerant operation, won’t unlock the most valuable commercial applications. The industry’s focus has shifted to quantum error correction as the key to building robust and scalable fault-tolerant machines. ... Most early quantum computing companies tried a full-stack approach. Now that the industry is maturing, a rich ecosystem of middle-of-the-stack players has emerged. This evolution allows companies to focus on what they do best and buy components and capabilities as needed, such as control systems from Quantum Machines and quantum software development from firms ... recent innovations in quantum networking technology have made a scale-out approach a serious contender. 


Post-Modern Ransomware: When Exfiltration Replaces Encryption

Exfiltration-first attacks have re-written the rules, with stolen data providing criminals with a faster, more reliable payday than the complex mechanics of encryption ever could. The threat of leaking data like financial records, intellectual property, and customer and employee details delivers instant leverage. Unlike encryption, if the victim stands firm and refuses to pay up, criminal groups can always sell their digital loot on the dark web or use it to fuel more targeted attacks. ... Phishing emails, once known for being riddled with tell-tale grammar and spelling mistakes, are now polished, personalized and delivered in perfect English. AI-powered deepfake voices and videos are providing convincing impersonations of executives or trusted colleagues that have defrauded companies for millions. At the same time, attackers are deploying custom chatbots to manage ransom negotiations across multiple victims simultaneously, applying pressure with the relentless efficiency of machines. ... Yet resilience is not simply a matter of dashboards and detection thresholds – it is equally about supporting those on the frontlines. Security leaders already working punishing hours under relentless scrutiny cannot be expected to withstand endless fatigue and a culture of blame without consequence. Organizations must also embed support for their teams into their response frameworks, from clear lines of communication and decompression time to wellbeing checks. 


The Data Sovereignty Challenge: How CIOs Are Adapting in Real Time

The uncertainty is driving concern. “There's been a lot more talk around, ‘Should we be managing sovereign cloud, should we be using on-premises more, should we be relying on our non-North American public contractors?” said Tracy Woo, a principal analyst with researcher and advisory firm Forrester. Ditching a major public cloud provider over sovereignty concerns, however, is not a practical option. These providers often underpin expansive global workloads, so migrating to a new architecture would be time-consuming, costly, and complex. There also isn’t a simple direct switch that companies can make if they’re looking to avoid public cloud; sourcing alternatives must be done thoughtfully, not just in reaction to one challenge. ... “There's a nervousness around deployment of AI, and I think that nervousness comes from -- definitely in conversations with other CIOs -- not knowing the data,” said Bell. Although decoupling from the major cloud providers is impractical on many fronts, issues of sovereignty as well as cost could still push CIOs to embrace a more localized approach, Woo said. “People are realizing that we don't necessarily need all the bells and whistles of the public cloud providers, whether that's for latency or performance reasons, or whether it's for cost or whether that's for sovereignty reasons,” explained Woo. 


Enterprise AI enters the age of agency, but autonomy must be governed

Agentic AI systems don’t just predict or recommend, they act. These intelligent software agents operate with autonomy toward defined business goals, planning, learning, and executing across enterprise workflows. This is not the next version of traditional automation or static bots. It’s a fundamentally different operating paradigm, one that will shape the future of digital enterprises. ... For many enterprises, the last decade of AI investment has focused on surfacing insights: detecting fraud, forecasting demand, and predicting churn. These are valuable outcomes, but they still require humans or rigid automation to respond. Agentic AI closes that gap. These agents combine machine learning, contextual awareness, planning, and decision logic to take goal-directed action. They can process ambiguity, work across systems, resolve exceptions, and adapt over time. ... Agentic AI will not simply automate tasks. It will reshape how work is designed, measured, and managed. As autonomous agents take on operational responsibility, human teams will move toward supervision, exception resolution, and strategic oversight. New KPIs will emerge, not just around cost or cycle time, but around agent quality, business impact, and compliance resilience. This shift will also demand new talent models. Enterprises must upskill teams to manage AI systems, not just processes. 


Cybersecurity in smart cities under scrutiny

The digital transformation of public services involves “an accelerated convergence between IT and OT systems, as well as the massive incorporation of connected IoT devices,” she explains, which gives rise to challenges such as an expanding attack surface or the coexistence of obsolete infrastructure with modern ones, in addition to a lack of visibility and control over devices deployed by multiple providers. ... “According to the European Cyber ​​Security Organisation, 86% of European local governments with IoT deployments have suffered some security breach related to these devices,” she says. Accenture’s Domínguez adds that the challenge is to consider “the fragmentation of responsibilities between administrations, concessionaires, and third parties, which complicates cybersecurity governance and requires advanced coordination models.” De la Cuesta also emphasizes the siloed nature of project development, which significantly hinders the development of an active cybersecurity strategy. ... In the integration of new tools, despite Spain holding a leading position in areas such as 5G, “technology moves much faster than the government’s ability to react,” he says. “It’s not like a private company, which has a certain agility to make investments,” he explains. “Public administration is much slower. Budgets are different. Administrative procedures are extremely long. From the moment a project is first discussed until it is actually executed, many years pass.”


Your SDLC Has an Evil Twin — and AI Built It

Welcome to the shadow SDLC — the one your team built with AI when you weren't looking: It generates code, dependencies, configs, and even tests at machine speed, but without any of your governance, review processes, or security guardrails. ... It’s not just about insecure code sneaking into production, but rather about losing ownership of the very processes you’ve worked to streamline. Your “evil twin” SDLC comes with: Unknown provenance → You can’t always trace where AI-generated code or dependencies came from. Inconsistent reliability → AI may generate tests or configs that look fine but fail in production. Invisible vulnerabilities → Flaws that never hit a backlog because they bypass reviews entirely. ... AI assistants are now pulling in OSS dependencies you didn’t choose — sometimes outdated, sometimes insecure, sometimes flat-out malicious. While your team already uses hygiene tools like Dependabot or Renovate, they’re only table stakes that don’t provide governance. ... The “evil twin” of your SDLC isn’t going away. It’s already here, writing code, pulling dependencies, and shaping workflows. The question is whether you’ll treat it as an uncontrolled shadow pipeline — or bring it under the same governance and accountability as your human-led one. Because in today’s environment, you don’t just own the SDLC you designed. You also own the one AI is building — whether you control it or not.


'ShadowLeak' ChatGPT Attack Allows Hackers to Invisibly Steal Emails

Researchers at Radware realized the issue earlier this spring, when they figured out a way of stealing anything they wanted from Gmail users who integrate ChatGPT. Not only was their trick devilishly simple, but it left no trace on an end user's network — not even an iota of the suspicious Web traffic typical of data exfiltration attacks. As such, the user had no way of detecting the attack, let alone stopping it. ... To perform a ShadowLeak attack, attackers send an outwardly normal-looking email to their target. They surreptitiously embed code in the body of the message, in a format that the recipient will not notice — for example, in extremely tiny text, or white text on a white background. The code should be written in HTML, being standard for email and therefore less suspicious than other, more powerful languages would be. ... The malicious code can instruct the AI to communicate the contents of the victim's emails, or anything else the target has granted ChatGPT access to, to an attacker-controlled server. ... Organizations can try to compensate with their own security controls — for example, by vetting incoming emails with their own tools. However, Geenens points out, "You need something that is smarter than just the regular-expression engines and the state machines that we've built. Those will not work anymore, because there are an infinite number of permutations with which you can write an attack in natural language." 


UK: World’s first quantum computer built using standard silicon chips launched

This is reportedly the first quantum computer to be built using the standard complementary metal-oxide-semiconductor (CMOS) chip fabrication process which is the same transistor technology used in conventional computers. A key part of this approach is building cryoelectronics that connect qubits with control circuits that work at very low temperatures, making it possible to scale up quantum processors greatly. “This is quantum computing’s silicon moment,” James Palles‑Dimmock, Quantum Motion’s CEO, stated. ... In contrast to other quantum computing approaches, the startup used high-volume industrial 300 millimeter chipmaking processes from commercial foundries to produce qubits. The architecture, control stack, and manufacturing approach are all built to scale to host millions of qubits and pave the way for fault-tolerant, utility-scale, and commercially viable quantum computing. “With the delivery of this system, Quantum Motion is on track to bring commercially useful quantum computers to market this decade,” Hugo Saleh, Quantum Motion’s CEO and president, revealed. ... The system’s underlying QPU is built on a tile-based architecture, integrating all compute, readout, and control components into a dense, repeatable array. This design enables future expansion to millions of qubits per chip, with no changes to the system’s physical footprint.


Key strategies to reduce IT complexity

The cloud has multiplied the fragmentation of solutions within companies, expanding the number of environments, vendors, APIs, and integration approaches, which has raised the skill set, necessitated more complex governance, and prompted the emergence of cross-functional roles between IT and business. Cybersecurity also introduces further levels of complexity, introducing new platforms, monitoring tools, regulatory requirements, and risk management approaches that must be overseen by expert personnel. And then there’s shadow IT. With the ease of access to cloud technologies, it’s not uncommon for business units to independently activate services without involving IT, generating further risks. ... “Structured upskilling and reskilling programs are needed to prepare people to manage new technologies,” says Massara. “So is an organizational model capable of managing a growing number of projects, which can no longer be handled in a one-off manner. The approach to project management is changing because the project portfolio has expanded significantly, and a structured PMO is required, with project managers who often no longer reside solely in IT, but directly within the business.” ... While it’s true that an IT system with disparate systems leads to greater complexity, companies are still very cost-conscious and wary about heavily investing in unification right away. But as systems become obsolete, they become more harmonized.


Unshackling IT: Why Third-Party Support Is a Strategic Imperative, Especially for AI

One of the most compelling arguments for independent third-party support is its inherent vendor neutrality. When a company relies solely on a software vendor for support, that vendor naturally has a vested interest in promoting its latest upgrades, cloud migrations, and proprietary solutions. This can create a conflict of interest, potentially pushing customers towards expensive, unnecessary upgrades or discouraging them from exploring alternatives that might be a better fit for their unique needs. ... The recent acquisition of VMware by Broadcom provides a compelling and timely illustration of why third-party support is becoming increasingly critical. Following the merger, many VMware customers have expressed significant dissatisfaction with changes to licensing models, product roadmaps, and, crucially, support. Broadcom has been criticized for restructuring VMware’s offerings and reportedly reducing support for smaller customers, pushing them towards bundled, more expensive solutions. ... The shift towards third-party support isn’t just about cost savings; it’s about regaining control, accessing unbiased expertise, and ensuring business continuity in a rapidly changing technological landscape. For companies making critical decisions about AI integration and managing complex enterprise systems, providers like Spinnaker Support offer a strategic advantage.

Daily Tech Digest - August 31, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson



A Brief History of GPT Through Papers

The first neural network based language translation models operated in three steps (at a high level). An encoder would embed the “source statement” into a vector space, resulting in a “source vector”. Then, the source vector would be mapped to a “target vector” through a neural network and finally a decoder would map the resulting vector to the “target statement”. People quickly realized that the vector that was supposed to encode the source statement had too much responsibility. The source statement could be arbitrarily long. So, instead of a single vector for the entire statement, let’s convert each word into a vector and then have an intermediate element that would pick out the specific words that the decoder should focus more on. ... The mechanism by which the words were converted to vectors was based on recurrent neural networks (RNNs). Details of this can be obtained from the paper itself. These recurrent neural networks relied on hidden states to encode the past information of the sequence. While it’s convenient to have all that information encoded into a single vector, it’s not good for parallelizability since that vector becomes a bottleneck and must be computed before the rest of the sentence can be processed. ... The idea is to give the model demonstrative examples at inference time as opposed to using them to train its parameters. If no such examples are provided in-context, it is called “zero shot”. If one example is provided, “one shot” and if a few are provided, “few shot”.


8 Powerful Lessons from Robert Herjavec at Entrepreneur Level Up That Every Founder Needs to Hear

Entrepreneurs who remain curious — asking questions and seeking insights — often discover pathways others overlook. Instead of dismissing a "no" or a difficult response, Herjavec urged attendees to look for the opportunity behind it. Sometimes, the follow-up question or the willingness to listen more deeply is what transforms rejection into possibility. ... while breakthrough innovations capture headlines, the majority of sustainable businesses are built on incremental improvements, better execution and adapting existing ideas to new markets. For entrepreneurs, this means it's okay if your business doesn't feel revolutionary from day one. What matters is staying committed to evolving, improving and listening to the market. ... setbacks are inevitable in entrepreneurship. The real test isn't whether you'll face challenges, but how you respond to them. Entrepreneurs who can adapt — whether by shifting strategy, reinventing a product or rethinking how they serve customers — are the ones who endure. ... when leaders lose focus, passion or clarity, the organization inevitably follows. A founder's vision and energy cascade down into the culture, decision-making and execution. If leaders drift, so does the company. For entrepreneurs, this is a call to self-reflection. Protect your clarity of purpose. Revisit why you started. And remember that your team looks to you not just for direction, but for inspiration. 


The era of cheap AI coding assistants may be over

Developers have taken to social media platforms and GitHub to express their dissatisfaction over the pricing changes, especially across tools like Claude Code, Kiro, and Cursor, but vendors have not adjusted pricing or made any changes that significantly reduce credits consumption. Analysts don’t see any alternative to reducing the pricing of these tools. "There’s really no alternative until someone figures out the following: how to use cheaper but dumber models than Claude Sonnet 4 to achieve the same user experience and innovate on KVCache hit rate to reduce the effective price per dollar,” said Wei Zhou, head of AI utility research at SemiAnalysis. Considering the market conditions, CIOs and their enterprises need to start absorbing the cost and treat vibe coding tools as a productivity expense, according to Futurum’s Hinchcliffe. “CIOs should start allocating more budgets for vibe coding tools, just as they would do for SaaS, cloud storage, collaboration tools or any other line items,” Hinchcliffe said. “The case of ROI on these tools is still strong: faster shipping, fewer errors, and higher developer throughput. Additionally, a good developer costs six figures annually, while vibe coding tools are still priced in the low-to-mid thousands per seat,” Hinchcliffe added. ... “Configuring assistants to intervene only where value is highest and choosing smaller, faster models for common tasks and saving large-model calls for edge cases could bring down expenditure,” Hinchcliffe added.


AI agents need intent-based blockchain infrastructure

By integrating agents with intent-centric systems, however, we can ensure users fully control their data and assets. Intents are a type of building block for decentralized applications that give users complete control over the outcome of their transactions. Powered by a decentralized network of solvers, agentic nodes that compete to solve user transactions, these systems eliminate the complexity of the blockchain experience while maintaining user sovereignty and privacy throughout the process. ... Combining AI agents and intents will redefine the Web3 experience while keeping the space true to its core values. Intents bridge users and agents, ensuring the UX benefits users expect from AI while maintaining decentralization, sovereignty and verifiability. Intent-based systems will play a crucial role in the next phase of Web3’s evolution by ensuring agents act in users’ best interests. As AI adoption grows, so does the risk of replicating the problems of Web2 within Web3. Intent-centric infrastructure is the key to addressing both the challenges and opportunities that AI agents bring and is necessary to unlock their full potential. Intents will be an essential infrastructure component and a fundamental requirement for anyone integrating or considering integrating AI into DeFi. Intents are not merely a type of UX upgrade or optional enhancement. 


The future of software development: To what can AI replace human developers?

Rather than replacing developers, AI is transforming them into higher-level orchestrators of technology. The emerging model is one of human-AI collaboration, where machines handle the repetitive scaffolding and humans focus on design, strategy, and oversight. In this new world, developers must learn not just to write code, but to guide, prompt, and supervise AI systems. The skillset is expanding from syntax and logic to include abstraction, ethical reasoning, systems thinking, and interdisciplinary collaboration. In other words, AI is not making developers obsolete. It is making new demands on their expertise. ... This shift has significant implications for how we educate the next generation of software professionals. Beyond coding languages, students will need to understand how to evaluate AI- AI-generated output, how to embed ethical standards into automated systems, and how to lead hybrid teams made up of both humans and machines. It also affects how organisations hire and manage talent. Companies must rethink job descriptions, career paths, and performance metrics to account for the impact of AI-enabled development. Leaders must focus on AI literacy, not just technical competence. Professionals seeking to stay ahead of the curve can explore free programs, such as The Future of Software Engineering Led by Emerging Technologies, which introduces the evolving role of AI in modern software development.


Open Data Fabric: Rethinking Data Architecture for AI at Scale

The first principle, unified data access, ensures that agents have federated real-time access across all enterprise data sources without requiring pipelines, data movement, or duplication. Unlike human users who typically work within specific business domains, agents often need to correlate information across the entire enterprise to generate accurate insights. ... The second principle, unified contextual intelligence, involves providing agents with the business and technical understanding to interpret data correctly. This goes far beyond traditional metadata management to include business definitions, domain knowledge, usage patterns, and quality indicators from across the enterprise ecosystem. Effective contextual intelligence aggregates information from metadata, data catalogs, business glossaries, business intelligence tools, and tribal knowledge into a unified layer that agents can access in real-time.  ... Perhaps the most significant principle involves establishing collaborative self-service. This is a significant shift as it means moving from static dashboards and reports to dynamic, collaborative data products and insights that agents can generate and share with each other. The results are trusted “data answers,” or conversational, on-demand data products for the age of AI that include not just query results but also the business context, methodology, lineage, and reasoning that went into generating them.


A Simple Shift in Light Control Could Revolutionize Quantum Computing

A research collaboration led by Vikas Remesh of the Photonics Group at the Department of Experimental Physics, University of Innsbruck, together with partners from the University of Cambridge, Johannes Kepler University Linz, and other institutions, has now demonstrated a way to bypass these challenges. Their method relies on a fully optical process known as stimulated two-photon excitation. This technique allows quantum dots to emit streams of photons in distinct polarization states without the need for electronic switching hardware. In tests, the researchers successfully produced high-quality two-photon states while maintaining excellent single-photon characteristics. ... “The method works by first exciting the quantum dot with precisely timed laser pulses to create a biexciton state, followed by polarization-controlled stimulation pulses that deterministically trigger photon emission in the desired polarization,” explain Yusuf Karli and Iker Avila Arenas, the study’s first authors. ... “What makes this approach particularly elegant is that we have moved the complexity from expensive, loss-inducing electronic components after the single photon emission to the optical excitation stage, and it is a significant step forward in making quantum dot sources more practical for real-world applications,” notes Vikas Remesh, the study’s lead researcher.


AI and the New Rules of Observability

The gap between "monitoring" and true observability is both cultural and technological. Enterprises haven't matured beyond monitoring because old tools weren't built for modern systems, and organizational cultures have been slow to evolve toward proactive, shared ownership of reliability. ... One blind spot is model drift, which occurs when data shifts, rendering its assumptions invalid. In 2016, Microsoft's Tay chatbot was a notable failure due to its exposure to shifting user data distributions. Infrastructure monitoring showed uptime was fine; only semantic observability of outputs would have flagged the model's drift into toxic behavior. Hidden technical debt or unseen complexity in code can undermine observability. In machine learning, or ML, systems, pipelines often fail silently, while retraining processes, feature pipelines and feedback loops create fragile dependencies that traditional monitoring tools may overlook. Another issue is "opacity of predictions." ... AI models often learn from human-curated priorities. If ops teams historically emphasized CPU or network metrics, the AI may overweigh those signals while downplaying emerging, equally critical patterns - for example, memory leaks or service-to-service latency. This can occur as bias amplification, where the model becomes biased toward "legacy priorities" and blind to novel failure modes. Bias often mirrors reality.


Dynamic Integration for AI Agents – Part 1

An integration of components within AI differs from an integration between AI agents. The former relates to integration with known entities that form a deterministic model of information flow. The same relates to inter-application, inter-system and inter-service transactions required by a business process at large. It is based on mapping of business functionality and information (an architecture of the business in organisations) onto available IT systems, applications, and services. The latter shifts the integration paradigm since the very AI Agents decide that they need to integrate with something at runtime based on the overlapping of the statistical LLM and available information, which contains linguistic ties unknown even in the LLM training. That is, an AI Agent does not know what a counterpart — an application, another AI Agent or data source — it would need to cooperate with to solve the overall task given to it by its consumer/user. The AI Agent does not know even if the needed counterpart exists. ... Any AI Agent may have its individual owner and provider. These owners and providers may be unaware of each others and act independently when creating their AI Agents. No AI Agent can be self-sufficient due to its fundamental design — it depends on the prompts and real-world data at runtime. It seems that the approaches to integration and the integration solutions differ for the humanitarian and natural science spheres.


Counteracting Cyber Complacency: 6 Security Blind Spots for Credit Unions

Organizations that conduct only basic vendor vetting lack visibility into the cybersecurity practices of their vendors’ subcontractors. This creates gaps in oversight that attackers can exploit to gain access to an institution’s data. Third-party providers often have direct access to critical systems, making them an attractive target. When they’re compromised, the consequences quickly extend to the credit unions they serve. ... Cybercriminals continue to exploit employee behavior as a primary entry point into financial institutions. Social engineering tactics — such as phishing, vishing, and impersonation — bypass technical safeguards by manipulating people. These attacks rely on trust, familiarity, or urgency to provoke an action that grants the attacker access to credentials, systems, or internal data. ... Many credit unions deliver cybersecurity training on an annual schedule or only during onboarding. These programs often lack depth, fail to differentiate between job functions, and lose effectiveness over time. When training is overly broad or infrequent, staff and leadership alike may be unprepared to recognize or respond to threats. The risk is heightened when the threats are evolving faster than the curriculum. TruStage advises tailoring cyber education to the institution’s structure and risk profile. Frontline staff who manage member accounts face different risks than board members or vendors. 

Daily Tech Digest - April 20, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti



The Digital Twin in Automotive: The Update

According to Digital Twin researcher Julian Gebhard, the industry is moving toward integrated federated systems that allow seamless data exchange and synchronization across tools and platforms. These systems rely on semantic models and knowledge graphs to ensure interoperability and data integrity throughout the product development process. By structuring data as semantic triples (e.g. (Car) → (is colored) → (blue)) data is traversable, transforming raw data to knowledge. Furthermore, it becomes machine-readable, an enabler for collaboration across departments making development more efficient and consistent. The next step is to use Knowledge Graphs to model product data on a value level, instead only connecting metadata. They enable dynamic feedback loops across systems, so that changes in one area, such as simulation results or geometry updates, can automatically influence related systems. This helps maintain consistency and accelerates iteration during development. Moreover, when functional data is represented at the value level, it becomes possible to integrate disparate systems such as simulation and CAD tools into a unified, holistic viewer. In this integrated model, any change in geometry in one system automatically triggers updates in simulation parameters and physical properties, ensuring that the digital twin evolves in tandem with the actual product. 


Wait, what is agentic AI?

AI agents are generally better than generative AI models at organizing, surfacing, and evaluating data. In theory, this makes them less prone to hallucinations. From the HBR article: “The greater cognitive reasoning of agentic AI systems means that they are less likely to suffer from the so-called hallucinations (or invented information) common to generative AI systems. Agentic AI systems also have [a] significantly greater ability to sift and differentiate information sources for quality and reliability, increasing the degree of trust in their decisions.” ... Agentic AI is a paradigm shift on the order of the emergence of LLMs or the shift to SaaS. That is to say, it’s a real thing, but we’re not yet close to understanding exactly how it will change the way we live and work just yet. The adoption curve for agentic AI will have its challenges. There are questions wherever you look: How do you put AI agents into production? How do you test and validate code generated by autonomous agents? How do you deal with security and compliance? What are the ethical implications of relying on AI agents? As we all navigate the adoption curve, we’ll do our best to help our community answer these questions. While building agents might quickly become easier, solving for these downstream impacts is still incomplete.


Contract-as-Code: Why Finance Teams Are Taking Over Your API Contracts

Forward-thinking companies are now applying cloud native principles to contract management. Just as infrastructure became code with tools like Terraform and Ansible, we’re seeing a similar transformation with business agreements becoming “contracts-as-code.” This shift integrates critical contract information directly into the CI/CD pipeline through APIs that connect legal document management with operational workflows. Contract experts at ContractNerds highlight how API connections enable automation and improve workflow management beyond what traditional contract lifecycle management systems can achieve alone. Interestingly, this cloud native contract revolution hasn’t been led by legal departments. From our experience working with over 1,500 companies, contract ownership is rapidly shifting to finance and operations teams, with CFOs becoming the primary stakeholders in contract management systems. ... As cloud native architectures mature, treating business contracts as code becomes essential for maintaining velocity. Successful organizations will break down the artificial boundary between technical contracts (APIs) and business contracts (legal agreements), creating unified systems where all obligations and dependencies are visible, trackable, and automatable.


ChatGPT can remember more about you than ever before – should you be worried?

Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI and Data Science at Matillion, a data integration platform with AI built in, sees strong use cases: “It could improve continuity for long-term projects, reduce repeated prompts, and offer a more tailored assistant experience," he says. But he’s also wary. “In practice, there are serious nuances that users, and especially companies, need to consider.” His biggest concerns here are privacy, control, and data security. ... OpenAI stresses that users can still manage memory – delete individual memories that aren't relevant anymore, turn it off entirely, or use the new “Temporary Chat” button. This now appears at the top of the chat screen for conversations that are not informed by past memories and won't be used to build new ones either. However, Wiffen says that might not be enough. “What worries me is the lack of fine-grained control and transparency,” he says. “It's often unclear what the model remembers, how long it retains information, and whether it can be truly forgotten.” ... “Even well-meaning memory features could accidentally retain sensitive personal data or internal information from projects. And from a security standpoint, persistent memory expands the attack surface.” This is likely why the new update hasn't rolled out globally yet.


How to deal with tech tariff terror

Are you confused about what President Donald J. Trump is doing with tariffs? Join the crowd; we all are. But if you’re in charge of buying PCs for your company (because Windows 10 officially reaches end-of-life status on Oct. 14) all this confusion is quickly turning into worry. Before diving into what this all means, let’s clarify one thing: you will be paying more for your technology gear — period, end of statement. ... As Ingram Micro CEO Paul Bay said in a CRN interview: “Tariffs will be passed through from the OEMs or vendors to distribution, then from distribution out to our solution providers and ultimately to the end users.” It’s already happening. Taiwan-based computing giant Acer’s CEO, Jason Chen, recently spelled it out cleanly: “10% probably will be the default price increase because of the import tax. It’s very straightforward.” When Trump came into office, we all knew there would be a ton of tariffs coming our way, especially on Chinese products such as Lenovo computers, or products largely made in China, such as those from Apple and Dell. ... But wait! It gets even murkier. Apparently that tariff “relief” is temporary and partial. US Commerce Secretary Howard Lutnick has already said that sector-specific tariffs targeting electronics are forthcoming, “probably a month or two.” Just to keep things entertaining, Trump himself has at times contradicted his own officials about the scope and duration of the exclusions.


AI Is Essential for Business Survival but It Doesn’t Guarantee Success

Li suggests companies look at how AI is integrated across the entire value chain. "To realize business value, you need to improve the whole value chain, not just certain steps." According to her, a comprehensive value chain framework includes suppliers, employees, customers, regulators, competitors, and the broader marketplace environment. For example, Li explains that when AI is applied internally to support employees, the focus is often on boosting productivity. However, using AI in customer-facing areas directly affects the products or services being delivered, which introduces higher risk. Similarly, automating processes for efficiency could influence interactions with suppliers — raising the question of whether those suppliers are prepared to adapt. ... Speaking of organizational challenges, Li discusses how positioning AI in business and positioning AI teams in organizations is critical. Based on the organization’s level of readiness and maturity, it could have a centralized or distributed, or federated model, but the focus should be on people. Thereafter, Li reminds that the organizational governance processes are related to its people, activities, and operating model. She adds, “If you already have an investment, evaluate and adjust your investment expectations based on the exercise.”


AI Regulation Versus AI Innovation: A Fake Dichotomy

The problem is that institutionalization without or with poor regulation – and we see algorithms as institutions – tends to move in an extractive direction, undermining development. If development requires technological innovation, Acemoglu, Johnson, and Robinson taught us that inclusive institutions that are transparent, equitable, and effective are needed. In a nutshell, long-term prosperity requires democracy and its key values. We must, therefore, democratize the institutions that play such a key role in shaping our contexts of interaction by affecting individual behaviors with collective implications. The only way to make algorithms more democratic is by regulating them, i.e., by creating rules that establish key values, procedures, and practices that ought to be respected if we, as members of political communities, are to have any control over our future. Democratic regulation of algorithms demands forms of participation, revisability, protection of pluralism, struggle against exclusion, complex output accountability, and public debate, to mention a few elements. We must bring these institutions closer to democratic principles, as we have tried to do with other institutions. When we consider inclusive algorithmic institutions, the value of equality plays a crucial role—often overlapping with the principle of participation. 


The Shadow AI Surge: Study Finds 50% of Workers Use Unapproved AI Tools

The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek their own AI tools to improve their personal efficiency and maximize the potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. “Using AI at work feels like second nature for many knowledge workers now. Whether it’s summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.” If the official tools aren’t easy to access or if they feel too locked down, they’ll use whatever’s available which is often via an open tab on their browser. There is almost also never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex but combine elements of a reluctance to admit that their efficiency is AI assisted rather than natural, and knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of Shadow IT, nor the risks it may present.


The Rise of the AI-Generated Fake ID

The rise of AI-generated IDs poses a serious threat to digital transactions for three key reasons.The physical and digital processes businesses use to catch fraudulent IDs are not created equal. Less sophisticated solutions may not be advanced enough to identify emerging fraud methods. With AI-generated ID images readily available on the dark web for as little as $5, ownership and usage are proliferating. IDScan.net research from 2024 demonstrated that ​​78% of consumers pointed to the misuse of AI as their core fear around identity protection. Equally, 55% believe current technology isn’t enough to protect our identities. Left unchallenged, AI fraud will damage consumer trust, purchasing behavior, and business bottom lines. Hiding behind the furor of nefarious, super-advanced AI, generating AI IDs is fairly rudimentary. Darkweb suppliers rely on PDF417 and ID image generators, using a degree of automation to match data inputs onto a contextual background. Easy-to-use tools such as Thispersondoesnotexist make it simple for anyone to cobble together a quality fake ID image and a synthetic identity. To deter potential AI-generated fake ID buyers from purchasing, the identity verification industry needs to demonstrate that our solutions are advanced enough to spot them, even as they increase in quality.


7 mistakes to avoid when turning a Raspberry Pi into a personal cloud

A Raspberry Pi may seem forgiving regarding power needs, but undervaluing its requirements can lead to sudden shutdowns and corrupted data. Cloud services that rely on a stable connection to read and write data need consistent energy for safe operation. A subpar power supply might struggle under peak usage, leading to instability or errors. Ensuring sufficient voltage and amperage is key to avoiding complications. A strong power supply reduces random reboots and performance bottlenecks. When the Pi experiences frequent resets, you risk damaging your data and your operating system’s integrity. In addition, any connected external drives might encounter file system corruption, harming stored data. Taking steps to confirm your power setup meets recommended standards goes a long way toward keeping your cloud server running reliably. ... A personal cloud server can create a false sense of security if you forget to establish a backup routine. Files stored on the Pi can be lost due to unexpected drive failures, accidents, or system corruption. Relying on a single storage device for everything contradicts the data redundancy principle. Setting up regular backups protects your data and helps you restore from mishaps with minimal downtime. Building a reliable backup process means deciding how often to copy your files and choosing safe locations to store them. 

Daily Tech Digest - March 13, 2025


Quote for the day:

"Your present circumstances don’t determine where you can go; they merely determine where you start." -- Nido Qubein


Becoming an AI-First Organization: What CIOs Must Get Right

"The three pillars of an AI-first organization are data, infrastructure and people. Data must be treated as a strategic asset with robust quality, privacy and security standards," Simha said. Along with responsible AI, responsible data management is equally crucial. When implemented effectively, data privacy, regulatory compliance, bias and security do not pose issues to an AI-first organization. Yeo described the AI-first approach as both a journey and a destination. "Just using AI tools doesn't make you AI-first. Organizations must explore AI's full potential." He compared today's AI evolution to the early days of the internet. "Decades ago, businesses knew they had to go online but didn't know how. Now, if you're not online, you're obsolete. AI is following the same trajectory - it will soon be indispensable for business success." ... Simha stressed the importance of enterprise architecture in AI deployment. "AI success depends on how well data flows across an organization. Organizations must select the right architecture patterns - real-time data processing requires a Kappa architecture, while periodic reporting benefits from a Lambda approach. A well-designed data foundation is crucial," Simha said. As AI adoption grows, ethical concerns and regulatory compliance remain critical considerations. 


From Box-Ticking to Risk-Tackling: Evolving Your GRC Beyond Audits

The problem, though, is that merely passing an audit does not necessarily mean a business is doing all it can to mitigate its risks. On their own, audits can fall short of driving full GRC maturity for several reasons ... Auditors are generally outsiders to the businesses they audit — which is good in the sense that it makes them objective evaluators. But it can also lead to situations where they have a limited understanding of what's really going on within a company's GRC practices and are beholden to the information provided by the company's team members on the other side of the assessment table. They may not ask the questions needed to gain adequate understanding to assess and find gaps, ultimately overlooking pitfalls that only insiders know about, and which would become obvious only following a higher degree of scrutiny than a standardized audit. ... But for companies that have made advanced GRC investments, such as automations that pull data from across a diverse set of disparate systems, deeper scrutiny will help validate the value that these investments have created. It may also uncover risk management weak points that the business is overlooking, allowing it to strengthen its GRC program even further. It's generally OK, by the way, if your business submits itself to a high degree of risk management scrutiny, only to fail the assessment because its controls are not as robust as it expected. 


How to use ChatGPT to write code - and my favorite trick to debug what it generates

After repeated tests, it became clear that if you ask ChatGPT to deliver a complete application, the tool will fail. A corollary to this observation is that if you know nothing about coding and want ChatGPT to build something, it will fail. Where ChatGPT succeeds -- and does so very well -- is in helping someone who already knows how to code to build specific routines and get tasks done. Don't ask for an app that runs on the menu bar. But if you ask ChatGPT for a routine to put a menu on the menu bar, and paste that into your project, the tool will do quite well. Also, remember that, while ChatGPT appears to have a tremendous amount of domain-specific knowledge (and often does), it lacks wisdom. As such, the tool may be able to write code, but it won't be able to write code containing the nuances for specific or complex problems that require deep experience. Use ChatGPT to demo techniques, write small algorithms, and produce subroutines. You can even get ChatGPT to help you break down a bigger project into chunks, and then you can ask it to help you code those chunks. ... But you can do several things to help refine your code, debug problems, and anticipate errors that might crop up. My favorite new AI-enabled trick is to feed code to a different ChatGPT session (or a different chatbot entirely) and ask, "What's wrong with this code?"


How AI-enabled ‘bossware’ is being used to track and evaluate your work

Employee monitoring tools can increase efficiency with features such as facial recognition, predictive analytics, and real-time feedback for workers, allowing them to better prioritize tasks and even prevent burnout. When AI is added, the software can be used to track activity patterns, flag unusual behavior, and analyze communication for signs of stress or dissatisfaction, according to analysts and industry experts. It also generates productivity reports, classifies activities, and detects policy violations. ... LLMs are often used in predicting employee behaviors, including the risk of quitting, unionizing, or other actions, Moradi said. However, their role is mostly in analyzing personal communications, such as emails or messages. That can be tricky, because interpreting messages across different people can lead to incorrect inferences about someone’s job performance. “If an algorithm causes someone to be laid off, legal recourse for bias or other issues with the decision-making process is unclear, and it raises important questions about accountability in algorithmic decisions,” she said. The problem, Moradi explained, is that while AI can make bossware more efficient and insightful, the data being collected by LLMs is obfuscated. “So, knowing the way that these decisions [like layoffs] are made are obscured by these, like, black boxes,” Moradi said.


Attackers Can Manipulate AI Memory to Spread Lies

By crafting a series of seemingly innocuous prompts, an attacker can insert misleading data into an AI agent's memory bank, which the model later relies on to answer unrelated queries from other users. Researchers tested Minja on three AI agents developed on top of OpenAI's GPT-4 and GPT-4o models. These include RAP, a ReAct agent with retrieval-augmented generation that integrates past interactions into future decision-making for web shops; EHRAgent, a medical AI assistant designed to answer healthcare queries; and QA Agent, a custom-built question-answering model that reasons using Chain of Thought and is augmented by memory. A Minja attack on the EHRAgent caused the model to misattribute patient records, associating one patient's data with another. In the RAP web shop experiment, a Minja attack tricked the AI into recommending the wrong product, steering users searching for toothbrushes to a purchase page for floss picks. The QA Agent fell victim to manipulated memory prompts, producing incorrect answers to multiple-choice questions based on poisoned context. Minja operates in stages. An attacker interacts with an AI agent by submitting prompts that contain misleading contextual information. Referred to as indication prompts, they appear to be legitimate but contain subtle memory-altering instructions. 


CISOs, are your medical devices secure? Attackers are watching closely

“To truly manage and prioritize risks, organizations need to look beyond technical scores and consider contextual risk factors that impact operations related to patient care. This can include identifying devices in critical care areas, legacy devices close to or past their end-of-life status, where any insecure communication protocols are, and how sensitive personal information is being stored,” Greenhalgh added. ... “For CISOs, the priority should be proactive engagement. First, implement real-time vulnerability tracking and ensure security patches can be deployed quickly without disrupting device functionality. Medical device security must be continuous—not just a checkpoint during development or regulatory submission. Second, regulatory alignment isn’t a one-time effort. The FDA now expects ongoing vulnerability monitoring, coordinated disclosure policies, and robust software patching strategies. Automating security processes—whether for SBOM (Software Bill of Materials) management, dependency tracking, or compliance reporting—reduces human error and improves response times. An SBOM is valuable not just for compliance but as a tool for tracking and mitigating vulnerabilities throughout a device’s lifecycle,” Ken Zalevsky, CEO of Vigilant Ops explained.


Can AI Teach You Empathy?

By leveraging AI-driven insights, banks can tailor their training programs to address specific skill gaps and enhance employee development. However, AI isn’t infallible, and it’s crucial for banks to implement tools that not only support learning but also foster a reliable and effective training environment. Striking the right balance between AI-driven training and human oversight ensures that these tools enhance employee growth without compromising accuracy or effectiveness. ... Experiential learning has long been a cornerstone of learning and development. Students, for example, who participate in experiential learning often develop a deeper understanding of the material and achieve statistically better outcomes than those who do not. While AI may not perfectly replicate a customer’s response, it provides new employees with a valuable opportunity to practice handling complex issues before interacting with real customers. AI-powered versions of these trainings can make it more accessible, allowing more employees to benefit. ... Many employees find it challenging to incorporate AI into their daily tasks and may need guidance to understand its value, especially in managing customer interactions. Some may also be resistant, fearing that AI could eventually replace their jobs, Huang says.


The Missing Piece in Platform Engineering: Recognizing Producers

The evolution of technology has shown us time and again that those who innovate are the ones who shape the future. Alan Kay’s words resonate strongly in the modern era, where software, artificial intelligence, and digital transformation continue to drive change across industries. ... “A Platform is a curated experience for engineers (the platform’s customers)” is a quote from the Team Topologies book. It is excellent and doesn’t contradict the platform business way of thinking, but it only calls out one side of the producer/consumer model. This is precisely the trap I fell into. When I worked with platform builders, we focused almost entirely on the application teams that consumed platform services. We rapidly became the blocker to those teams, just like the SRE and DevOps teams that came before us. We couldn’t onboard capabilities and features fast enough, meaning we were supporting the old ways while trying to build the new. ... Chris Plank, Enterprise Architect at NatWest, discusses this in our interview for his Platform Engineering Day talk: “We have since been set four challenges by leadership that I talk about: do things faster, do things simpler, enable inner sourcing, and deliver centralized capabilities in a self-service way… Our inner sourcing model will allow us to have multiple teams working on our platform… They are empowered to start contributing changes.”


Data Centers in Space: Separating Fact from Science Fiction

Among the many reasons for interest in orbital data centers is the potential for improved sustainability. However, the definition of a data center in space remains fluid, shaped by current technological limitations and evolving industry perspectives. Lonestar Data Holdings chairman and CEO Christopher Slott told Data Center Knowledge that his firm works from the definitions of a data center from industry standards bodies including the Uptime Institute and the Building Industry Consulting Service International (BICSI). ... Axiom Space plans to deploy larger ODC infrastructure in the coming years that are more similar to terrestrial data centers in terms of utility and capacity. The goal is to develop and operationalize terrestrial-grade cloud regions in low-Earth orbit (LEO). ... James noted that space presents the ultimate edge computing challenge – limited bandwidth, extreme conditions, and no room for failure. “To ensure resilience and autonomy, the platform incorporates automated rollbacks and self-healing capabilities through delta updates and health monitoring,” James said. ... With the Axiom Space deployment, the initial workloads will be small but scalable to the much larger ODC infrastructure that the company plans to deploy in the coming years. “Red Hat Device Edge enables secure, low-latency data processing directly on the ISS, allowing applications to run where the data is being generated,” James said. 


CISA cybersecurity workforce faces cuts amid shifting US strategy

Analysts suggest these layoffs and funding cuts indicate a broader strategic shift in the U.S. government’s cybersecurity approach. Neil Shah, VP at Counterpoint Research, sees both risks and opportunities in the restructuring. “In the near to mid-term, this could weaken the US cybersecurity infrastructure. However, with AI proliferating, the US government likely has a Plan B — potentially shifting toward privatized cybersecurity infrastructure projects, similar to what we’re seeing with Project Stargate for AI,” Shah said. “If these gaps aren’t filled with viable alternatives, vulnerabilities could escalate from small-scale exploits to large-scale cyber incidents at state or federal levels. Signs point to a broader cybersecurity strategy reboot, with funding likely being redirected toward more efficient and sophisticated players rather than a purely vertical, government-led approach.” While some fear heightened risks, others argue the shift could lead to more tech-driven solutions. Faisal Kawoosa, founder and lead analyst at Techarc, views the move as part of a larger digital transformation. “Elon Musk’s role is not just about cost-cutting but also about leveraging technology to create more efficient systems,” Kawoosa said. “DOGE operates as a digital transformation program for US governance, exploring tech-first approaches to achieving similar or better results.”

Daily Tech Digest - January 27, 2025


Quote for the day:

"Your problem isn't the problem. Your reaction is the problem." -- Anonymous


Revolutionizing Investigations: The Impact of AI in Digital Forensics

One of the most significant challenges in modern digital forensics, both in the corporate sector and law enforcement, is the abundance of data. Due to increasing digital storage capacities, even mobile devices today can accumulate up to 1TB of information. ... Digital forensics started benefiting from AI features a few years ago. The first major development in this regard was the implementation of neural networks for picture recognition and categorization. This powerful tool has been instrumental for forensic examiners in law enforcement, enabling them to analyze pictures from CCTV and seized devices more efficiently. It significantly accelerated the identification of persons of interest and child abuse victims as well as the detection of case-related content, such as firearms or pornography. ... No matter how advanced, AI operates within the boundaries of its training, which can sometimes be incomplete or imperfect. Large language models, in particular, may produce inaccurate information if their training data lacks sufficient detail on a given topic. As a result, investigations involving AI technologies require human oversight. In DFIR, validating discovered evidence is standard practice. It is common to use multiple digital forensics tools to verify extracted data and manually check critical details in source files. 


Is banning ransomware payments key to fighting cybercrime?

Implementing a payment ban is not without challenges. In the short term, retaliatory attacks are a real possibility as cybercriminals attempt to undermine the policy. However, given the prevalence of targets worldwide, I believe most criminal gangs will simply focus their efforts elsewhere. The government’s resolve would certainly be tested if payment of a ransom was seen as the only way to avoid public health data being leaked, energy networks being crippled, or preventing a CNI organization from going out of business. In such cases, clear guidelines as well as technical and financial support mechanisms for affected organizations are essential. Policy makers must develop playbooks for such scenarios and run education campaigns that raise awareness about the policy’s goals, emphasizing the long-term benefits of standing firm against ransom demands. That said, increased resilience—both technological and organizational—are integral to any strategy. Enhanced cybersecurity measures are critical, in particular a zero trust strategy that reduces an organization’s attack surface and stops hackers from being able to move laterally in the network. The U.S. federal government has already committed to move to zero trust architectures.


Building a Data-Driven Culture: Four Key Elements

Why is building a data-driven culture incredibly hard? Because it calls for a behavioral change across the organization. This work is neither easy nor quick. To better appreciate the scope of this challenge, let’s do a brief thought exercise. Take a moment to reflect on these questions: How involved are your leaders in championing and directly following through on data-driven initiatives? Do you know whether your internal stakeholders are all equipped and empowered to use data for all kinds of decisions, strategic or tactical? Does your work environment make it easy for people to come together, collaborate with data, and support one another when they’re making decisions based on the insights? Does everyone in the organization truly understand the benefits of using data, and are success stories regularly shared internally to inspire people to action? If your answers to these questions are “I’m not sure” or “maybe,” you’re not alone. Most leaders assume in good faith that their organizations are on the right path. But they struggle when asked for concrete examples or data-backed evidence to support these gut-feeling assumptions. The leaders’ dilemma becomes even more clear when you consider that the elements at the core of the four questions above — leadership intervention, data empowerment, collaboration, and value realization — are inherently qualitative. Most organizational metrics or operational KPIs don’t capture them today. 


How CIOs Should Prepare for Product-Led Paradigm Shift

Scaling product centricity in an organization is like walking a tightrope. Leaders must drive change while maintaining smooth operations. This requires forming cross-functional teams, outcome-based evaluation and navigating multiple operating models. As a CIO, balancing change while facing the internal resistance of a risk-averse, siloed business culture can feel like facing a strong wind on a high wire. ...The key to overcoming this is to demonstrate the benefits of a product-centric approach incrementally, proving its value until it becomes the norm. To prevent cultural resistance from derailing your vision for a more agile enterprise, leverage multiple IT operating models with a service or value orientation to meet the ambitious expectations of CEOs and boards. Engage the C-suite by taking a holistic view of how democratized IT can be used to meet stakeholder expectations. Every organization has a business and enterprise operating model to create and deliver value. A business model might focus on manufacturing products that delight customers, requiring the IT operating model to align with enterprise expectations. This alignment involves deciding whether IT will merely provide enabling services or actively partner in delivering external products and services.


CISOs gain greater influence in corporate boardrooms

"As the role of the CISO grows more complex and critical to organisations, CISOs must be able to balance security needs with business goals, culture, and articulate the value of security investments." She highlights the importance of strong relationships across departments and stakeholders in bolstering cybersecurity and privacy programmes. The study further discusses the positive impact of having board members with a cybersecurity background. These members foster stronger relationships with security teams and have more confidence in their organisation's security stance. For instance, boards with a CISO member report higher effectiveness in setting strategic cybersecurity goals and communicating progress, compared to boards without such expertise. CISOs with robust board relationships report improved collaboration with IT operations and engineering, allowing them to explore advanced technologies like generative AI for enhanced threat detection and response. However, gaps persist in priority alignment between CISOs and boards, particularly around emerging technologies, upskilling, and revenue growth. Expectations for CISOs to develop leadership skills add complexity to their role, with many recognising a gap in business acumen, emotional intelligence, and communication. 


Researchers claim Linux kernel tweak could reduce data center energy use by 30%

Researchers at the University of Waterloo's Cheriton School of Computer Science, led by Professor Martin Karsten and including Peter Cai, identified inefficiencies in network traffic processing for communications-heavy server applications. Their solution, which involves rearranging operations within the Linux networking stack, has shown improvements in both performance and energy efficiency. The modification, presented at an industry conference, increases throughput by up to 45 percent in certain situations without compromising tail latency. Professor Karsten likened the improvement to optimizing a manufacturing plant's pipeline, resulting in more efficient use of data center CPU caches. Professor Karsten collaborated with Joe Damato, a distinguished engineer at Fastly, to develop a non-intrusive kernel change consisting of just 30 lines of code. This small but impactful modification has the potential to reduce energy consumption in critical data center operations by as much as 30 percent. Central to this innovation is a feature called IRQ (interrupt request) suspension, which balances CPU power usage with efficient data processing. By reducing unnecessary CPU interruptions during high-traffic periods, the feature enhances network performance while maintaining low latency during quieter times.


GitHub Desktop Vulnerability Risks Credential Leaks via Malicious Remote URLs

While the credential helper is designed to return a message containing the credentials that are separated by the newline control character ("\n"), the research found that GitHub Desktop is susceptible to a case of carriage return ("\r") smuggling whereby injecting the character into a crafted URL can leak the credentials to an attacker-controlled host. "Using a maliciously crafted URL it's possible to cause the credential request coming from Git to be misinterpreted by Github Desktop such that it will send credentials for a different host than the host that Git is currently communicating with thereby allowing for secret exfiltration," GitHub said in an advisory. A similar weakness has also been identified in the Git Credential Manager NuGet package, allowing for credentials to be exposed to an unrelated host. ... "While both enterprise-related variables are not common, the CODESPACES environment variable is always set to true when running on GitHub Codespaces," Ry0taK said. "So, cloning a malicious repository on GitHub Codespaces using GitHub CLI will always leak the access token to the attacker's hosts." ... In response to the disclosures, the credential leakage stemming from carriage return smuggling has been treated by the Git project as a standalone vulnerability (CVE-2024-52006, CVSS score: 2.1) and addressed in version v2.48.1.


The No-Code Dream: How to Build Solutions Your Customer Truly Needs

What's excellent about no-code is that you can build a platform that won't require your customers to be development professionals — but will allow customization. That's the best approach: create a blank canvas for people, and they will take it from there. Whether it's surveys, invoices, employee records, or something completely different, developers have the tools to make it visually appealing to your customers, making it more intuitive for them. I also want to break the myth that no code doesn't allow effective data management. It is possible to create a no-code platform that will empower users to perform complex mathematical operations seamlessly and to support managing interrelated data. This means users' applications will be more robust than their competitors and produce more meaningful insights. ... As a developer, I am passionate about evolving tech and our industry's challenges. I am also highly aware of people's concerns over the security of many no-code solutions. Security is a critical component of any software; no-code solutions are no exception. One-off custom software builds do not typically undergo the same rigorous security testing as widely used commercial software due to the high cost and time involved. This leaves them vulnerable to security breaches.


Digital Operations at Turning Point as Security and Skills Concerns Mount

The development of appropriate skills and capabilities has emerged as a critical challenge, ranking as a pressing concern in advancing digital operations. The talent shortage is most acute in North America and the media industry, where fierce competition for skilled professionals coincides with accelerating digital transformation initiatives. Organizations face a dual challenge: upskilling existing staff while competing for scarce talent in an increasingly competitive market. The report suggests this skills gap could potentially slow the adoption of new technologies and hamper operational advancement if not adequately addressed. "The rapid evolution of how AI is being applied to many parts of jobs to be done is unmatched," Armandpour said. "Raising awareness, educating, and fostering a rich learning environment for all employees is essential." ... "Service outages today can have a much greater impact due to the interdependencies of modern IT architectures, so security is especially critical," Armandpour said. "Organizations need to recognize security as a critical business imperative that helps power operational resilience, customer trust, and competitive advantage." What sets successful organizations apart is the prioritization of defining robust security requirements upfront and incorporating security-by-design into product development cycles. 


Is ChatGPT making us stupid?

In fact, one big risk right now is how dependent developers are becoming on LLMs to do their thinking for them. I’ve argued that LLMs help senior developers more than junior developers, precisely because more experienced developers know when an LLM-driven coding assistant is getting things wrong. They use the LLM to speed up development without abdicating responsibility for that development. Junior developers can be more prone to trusting LLM output too much and don’t know when they’re being given good code or bad. Even for experienced engineers, however, there’s a risk of entrusting the LLM to do too much. For example, Mike Loukides of O’Reilly Media went through their learning platform data and found developers show “less interest in learning about programming languages,” perhaps because developers may be too “willing to let AI ‘learn’ the details of languages and libraries for them.” He continues, “If someone is using AI to avoid learning the hard concepts—like solving a problem by dividing it into smaller pieces (like quicksort)—they are shortchanging themselves.” Short-term thinking can yield long-term problems. As noted above, more experienced developers can use LLMs more effectively because of experience. If a developer offloads learning for quick-fix code completion at the long-term cost of understanding their code, that’s a gift that will keep on taking.