
Daily Tech Digest - December 24, 2025


Quote for the day:

"The only person you are destined to become is the person you decide to be." -- Ralph Waldo Emerson



When is an AI agent not really an agent?

If you believe today’s marketing, everything is an “AI agent.” A basic workflow worker? An agent. A single large language model (LLM) behind a thin UI wrapper? An agent. A smarter chatbot with a few tools integrated? Definitely an agent. The issue isn’t that these systems are useless. Many are valuable. The problem is that calling almost anything an agent blurs an important architectural and risk distinction. ... If a vendor knows its system is mainly a deterministic workflow plus LLM calls but markets it as an autonomous, goal-seeking agent, buyers are misled not just about branding but also about the system’s actual behavior and risk. That type of misrepresentation creates very real consequences. Executives may assume they are buying capabilities that can operate with minimal human oversight when, in reality, they are procuring brittle systems that will require substantial supervision and rework. Boards may approve investments on the belief that they are leaping ahead in AI maturity, when they are really just building another layer of technical and operational debt. Risk, compliance, and security teams may under-specify controls because they misunderstand what the system can and cannot do. ... demand evidence instead of demos. Polished demos are easy to fake, but architecture diagrams, evaluation methods, failure modes, and documented limitations are harder to counterfeit. If a vendor can’t clearly explain how their agents reason, plan, act, and recover, that should raise suspicion. 


Five identity-driven shifts reshaping enterprise security in 2026

Organizations that continue to treat identity as a static access problem will fall behind attackers who exploit AI-powered automation, credential abuse, and identity sprawl. The enterprises that succeed will be those that re-architect identity security as a continuous, data-aware control plane, one built to govern humans, machines, and AI with the same rigor, visibility, and accountability. ... Unlike traditional shadow IT, shadow AI is both more powerful and more dangerous. Employees can deploy advanced models trained on sensitive company data, and these tools often store or transmit privileged credentials, API keys, and service tokens without oversight. Even sanctioned AI tools become risky when improperly configured or connected to internal workflows. ... With AI-driven automation, sophisticated playbooks previously reserved for top-tier nation-states become accessible to countries and non-state actors with far fewer resources. This levels the playing field and expands the number of threat actors capable of meaningful, identity-focused cyber aggression. In 2026, expect more geopolitical disruptions driven by identity warfare, synthetic information, and AI-enabled critical infrastructure targeting. ... Machine identities have become the primary source of privilege misuse, and their growth shows no sign of slowing. As AI-driven automation accelerates and IoT ecosystems proliferate, organizations will hit a governance tipping point. 2026 will force security teams to confront a tough reality. Identity-first security can’t stop with humans. 


Implementing NIS2 — without getting bogged down in red tape

NIS2 essentially requires three things: concrete security measures; processes and guidelines for managing these measures; and robust evidence that they work in practice. ... Therefore, two levels are crucial for NIS2: the technical measures and the evidence that they are effective. This is precisely where the transformation of recent years becomes apparent. Previously, concepts, measures, and specifications for software and IT infrastructures were predominantly documented in text form. ... The second area that NIS2 and the new Implementing Regulation 2024/2690 for digital services are enshrining in law is vulnerability management in the company’s own code and supply chain. This requires regular vulnerability scans, procedures for assessment and prioritization, timely remediation of critical vulnerabilities, and regulated vulnerability handling and — where necessary — coordinated vulnerability disclosure. Cloud and SaaS providers also face additional supply chain obligations ... The third area where NIS2 quickly becomes a paper tiger is the combination of monitoring, incident response, and the new reporting requirements. The directive sets clear deadlines: early warning within 24 hours, a structured report after 72 hours, and a final report no later than one month. ... NIS2 forces companies to explicitly define their security measures, processes, and documentation. This is inconvenient — especially for organizations that have previously operated largely on an ad-hoc basis. 
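The three reporting deadlines named in the directive lend themselves to simple automation inside an incident-response tool. A minimal sketch, assuming a helper of our own invention and approximating "no later than one month" as 30 days (the directive itself does not fix a day count):

```python
from datetime import datetime, timedelta

# Hypothetical helper: from the moment an incident is detected, derive the
# three NIS2 reporting deadlines -- 24h early warning, 72h structured report,
# and a final report within one month (approximated here as 30 days).
def nis2_deadlines(detected_at: datetime) -> dict:
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "structured_report": detected_at + timedelta(hours=72),
        "final_report": detected_at + timedelta(days=30),
    }

detected = datetime(2025, 1, 10, 9, 0)
deadlines = nis2_deadlines(detected)
print(deadlines["early_warning"])  # 2025-01-11 09:00:00
```

Wiring such deadlines into ticketing or on-call alerts is one concrete way to keep the "evidence that it works in practice" requirement from becoming paperwork after the fact.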


Rethinking Anomaly Detection for Resilient Enterprise IT

Being armed with this knowledge is only the first step, though. The next challenge is detecting anomalies consistently and accurately in complex environments. This task is becoming increasingly difficult as IT environments undergo continuous digital transformation, shift towards hybrid-cloud setups, and rely on legacy systems that are well past their prime. These challenges introduce dynamic data, pushing IT leaders to rethink their anomaly detection processes. ... By incorporating seasonal patterns, user behavior, and workload types, adaptive baselines filter out the noise and highlight genuine deviations. Another factor to integrate is the overall context of a situation. Metrics rarely operate in isolation. During a planned deployment, a spike in network latency would be anticipated. The same spike would be read very differently if it occurred during steady operations. By combining telemetry with contextual signals, anomaly detection systems can separate the expected from the unexpected. ... Anomaly detection is meant to strengthen operations and improve overall resilience. However, it cannot deliver on this promise when teams are constantly swimming through seas of generated alerts. By adopting new, context-aware approaches to the full variety of anomalies, systems can identify root causes, correct systemic failures that span multiple metrics, and mitigate the risk of outages.
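The idea of combining a statistical baseline with a contextual signal can be sketched in a few lines. This is an illustrative toy, not a production detector; the deployment flag and the doubled threshold during deployments are assumptions of this sketch:

```python
import statistics

# Toy context-aware anomaly check: a metric is flagged only if it deviates
# strongly from its recent baseline AND no expected cause (such as a planned
# deployment) explains the deviation.
def is_anomalous(history, value, deployment_in_progress=False, z_threshold=3.0):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9  # guard against zero variance
    z = abs(value - mean) / stdev
    if deployment_in_progress:   # contextual signal: spikes are expected now
        z_threshold *= 2         # tolerate larger deviations during rollout
    return z > z_threshold

latency_ms = [102, 98, 105, 101, 99, 100, 103, 97]
print(is_anomalous(latency_ms, 110))                               # True
print(is_anomalous(latency_ms, 110, deployment_in_progress=True))  # False
```

The same spike is flagged in steady operations but tolerated during a rollout, which is exactly the separation of "expected" from "unexpected" the excerpt describes.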


Bridging the Gap: Engineering Resilience in Hybrid Environments (DR, Failover, and Chaos)

Resilience in a hybrid environment isn't just about preventing failure; it’s about enduring it. It requires moving beyond hope as a strategy and embracing a tripartite approach: Robust Disaster Recovery (DR), automated Failover, and proactive Chaos Engineering. ... Disaster Recovery is your insurance policy for catastrophic events. It is the process of regaining access to data and infrastructure after a significant outage—a hurricane hitting your primary data center, a massive ransomware attack, or a prolonged regional cloud failure. ... While DR handles catastrophes, Failover handles the everyday hiccups. Failover is the (ideally automatic) process of switching to a redundant or standby system upon the failure of the primary system. Failover mechanisms in a hybrid environment ensure immediate operational continuity by automatically switching workloads from a failed primary system (on-premises or cloud) to a redundant secondary system with minimal downtime. This requires coordinating recovery across cloud and on-premises platforms. ... Chaos engineering is a proactive discipline used to stress-test systems by intentionally introducing controlled failures to identify weaknesses and build resilience. In hybrid environments—which combine on-premises infrastructure with cloud resources—this practice is essential for navigating the added complexity and ensuring continuous reliability across diverse platforms.
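A health-check-driven failover of the kind described can be reduced to a minimal sketch. All names here (`Failover`, the `check` callback, the system labels) are illustrative assumptions, not any particular product's API:

```python
# Minimal failover sketch: route to the primary while its health check
# passes; switch automatically to the standby when it fails.
class Failover:
    def __init__(self, primary, standby, check):
        self.primary, self.standby, self.check = primary, standby, check

    def active(self):
        # The health check decides, on every call, which system serves traffic.
        return self.primary if self.check(self.primary) else self.standby

healthy = {"on-prem": True, "cloud": True}
router = Failover("on-prem", "cloud", lambda system: healthy[system])
print(router.active())      # on-prem
healthy["on-prem"] = False  # simulate a primary outage
print(router.active())      # cloud
```

Real hybrid failover adds the hard parts the excerpt hints at — replicated state, DNS or load-balancer reconfiguration, and coordination across cloud and on-premises control planes — but the decision logic is this simple at its core.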


Should CIOs rethink the IT roadmap?

As technology consultancy West Monroe states: “You don’t need bigger plans — you need faster moves.” This is a fitting mantra for IT roadmap development today. CIOs should ask themselves where the most likely business and technology plan disrupters are going to come from. ... Understandably, CIOs can only develop future-facing technology roadmaps with what they see at a present point in time. However, they do have the ability to improve the quality of their roadmaps by reviewing and revising these plans more often. ... CIOs should revisit IT roadmaps quarterly at a minimum. If roadmaps must be altered, CIOs should communicate to their CEOs, boards, and C-level peers what’s happening and why. In this way, no one will be surprised when adjustments must be made. As CIOs get more engaged with lines of business, they can also show how technology changes are going to affect company operations and finances before these changes happen ... Equally important is emphasizing that a seismic change in technology roadmap direction could impact budgets. For instance, if AI-driven security threats begin to impact company AI and general systems, IT will need AI-ready tools and skills to defend and to mitigate these threats. ... Now is the time for CIOs to transform the IT roadmap into a more malleable and responsive document that can accommodate the disruptive changes in business and technology that companies are likely to experience.


Why shadow IT is a growing security concern for data centre teams

It is essential to recognise that employees use shadow IT to get their work done efficiently, not to deliberately create security risks. This should be front of mind for any IT teams and data centre consultants involved in infrastructure design and security provision. Assigning blame or blocking everything does not work. A more effective way to address shadow IT is to invest for the long term in a culture which promotes IT as a partner to workplace productivity, not a hindrance. Ideally, this demands buy-in from senior management. It falls to IT teams to provide people with the tools for their jobs; offering choice, listening to employees’ requests and delivering prompt solutions will encourage the transparency IT needs to analyse usage patterns and address minor issues before they grow into costly problems. Importantly, this goes a long way towards embracing new technologies and avoiding employees turning to shadow IT that they find and use without approval. ... While IT teams are focused on gaining visibility and control over the software, hardware and services used by their organisations, they also need to be careful not to stifle innovation. It is here that data centre operators can share ideas on how best to achieve this balance, as there is never going to be one model that suits every business. 


From Digitalization to Intelligence: How AI Is Redefining Enterprise Workflows

In the AI economy, digitalization plays another important role—turning paper documents into data suitable for LLM engines. This will become increasingly important as more sites restrict crawlers or require licensing, which reduces the usable pool of data. A 2024 report from the nonprofit watchdog Epoch AI projected that large language models (LLMs) could run out of fresh, human-generated training data as soon as 2026. Companies that rely purely on publicly available crawl data for continuous scaling likely will encounter diminishing returns. To get ahead of this looming shortage of public data, enterprises will need to use their digitized documents and corporate data to fine-tune models for domain-specific tasks rather than rely only on generic web data. Intelligent capture technologies can now recognize document types, extract key entities, and validate information automatically. Once digitized, this data flows directly into enterprise systems where AI models can uncover insights or predict outcomes. ... Automation isn’t just about doing more with less; it’s about learning from every action. Each scan, transaction, or decision strengthens the feedback loop that powers enterprise AI systems. The organizations recognizing this shift early will outpace competitors that still treat data capture as a back-office function. The winners will be those that turn the last mile of digitalization into the first mile of intelligence.
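The capture, extract, validate flow described above can be illustrated on a toy invoice string. The field names and regex patterns below are assumptions of this sketch, not any vendor's capture API — real intelligent capture uses trained models rather than hand-written patterns:

```python
import re

# Toy capture pipeline: extract key entities from a document string,
# then validate that all required fields were found.
def capture_invoice(text):
    fields = {
        "invoice_no": re.search(r"Invoice\s+#(\w+)", text),
        "total":      re.search(r"Total:\s+\$([\d.]+)", text),
    }
    extracted = {k: m.group(1) for k, m in fields.items() if m}
    # Validation step: every required field must be present.
    extracted["valid"] = set(extracted) >= {"invoice_no", "total"}
    return extracted

doc = "Invoice #A1027  Acme Corp  Total: $1499.00"
print(capture_invoice(doc))
# {'invoice_no': 'A1027', 'total': '1499.00', 'valid': True}
```

Once validated, records like this are what "flows directly into enterprise systems" — structured data rather than scanned pixels.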


Boardrooms demand tougher AI returns & stronger data

Budget scrutiny is increasing as wider economic conditions remain uncertain and as organisations review early generative AI experiments. "AI investment is no longer about FOMO. Boards and CFOs want answers about what's working, where it's paying off, and why it matters now. 2026 will be a year of focus. Flashy experiments and perpetual pilots will lose funding. Projects that deliver measurable outcomes will move to the center of the roadmap," said McKee, CEO of Ataccama. ... "For years people have predicted that AI will hollow out data teams, yet the closer you get to real deployments, the harder that story is to believe. Once agents take over the repetitive work of querying, cleaning, documenting, and validating data, the cost of generating an insight will begin falling toward zero. And when the cost of something useful drops, demand rises. We've seen this pattern with steam engines, banking, spreadsheets, and cloud compute, and data will follow the same curve," said Keyser. Keyser said easier access to data and analysis is likely to change behaviours in business units that have not traditionally engaged with central data groups. He expects a rise in AI-literate staff across operational functions and a larger need for oversight. ... The organizations that adopt agents will discover something counterintuitive. They won't end up with fewer data workers, but more. This is Jevons paradox applied to analytics. When insight becomes easier, curiosity will expand and decision-making will accelerate.


The Blind Spots Created by Shadow AI Are Bigger Than You Think

If you think it’s the same as the old “shadow IT” problem with different branding, you’re wrong. Shadow AI is faster, harder to detect, and far more entangled with your intellectual property and data flows than any consumer SaaS tool ever was. ... Shadow AI is not malicious in nature; in fact, the intent is almost always to improve productivity or convenience. Unfortunately, the impact is a major increase in unplanned data exposure, untracked model interactions, and blind spots across your attack surface. ... Most AI tools don’t clearly explain how long they keep your data. Some retrain on what you enter, others store prompts forever for debugging, and a few have almost no limits at all. That means your sensitive info could be copied, stored, reused for training, or even show up later to people it shouldn’t. Ask Samsung, whose internal code found its way into a public model’s responses after an engineer uploaded it. They banned AI instantly. Hardly the most strategic solution, and definitely not the last time you’ll see this happen. ... Shadow AI bypasses identity controls, DLP, SASE boundaries, cloud logging, and sanctioned inference gateways. All that “AI data exhaust” ends up scattered across a slew of unsanctioned tools and locations. Your exposure assessments are, by default, incomplete because you can’t protect what you can’t see. ... Shadow AI has changed from an occasional edge case to everyday behavior happening across all departments.

Daily Tech Digest - November 03, 2025


Quote for the day:

"With the new day comes new strength and new thoughts." -- Eleanor Roosevelt


Smaller, Smarter, Faster: AI Will Scale Differently in 2026

"Technology leaders face a pivotal year in 2026, where disruption, innovation and risk are expanding at unprecedented speed," said Gene Alvarez, distinguished vice president analyst at Gartner. "The top strategic technology trends identified for 2026 are tightly interwoven and reflect the realities of an AI-powered, hyperconnected world where organizations must drive responsible innovation, operational excellence and digital trust." The centerpiece of that thesis is the pivot from large, general-purpose LLMs to domain-specific language models (DSLMs) and modular multiagent systems (MAS), designed to execute and audit business workflows. DSLMs promise higher accuracy, lower downstream compliance risk and cheaper inference costs; MAS promise orchestration and scale. ... The back half of Gartner's report is a sober reminder of the price of admission. First is geopatriation. This is the C-suite-level trend of yanking critical data and apps out of global public clouds and moving them to local or "sovereign" clouds. Driven by regulations like Europe's GDPR and fears over the US CLOUD Act, this market is exploding. Second, the security model is flipping. Gartner's Preemptive Cybersecurity trend predicts a massive shift, forecasting that 50% of IT security spending will move from "detection and response" to "proactive protection" by 2030, up from less than 5% in 2024. 


Today’s security leaders must adopt an asymmetric mindset

We’ve built an unbalanced view of threats. We pour resources into the risks we know how to manage — firewalls, access control, guard contracts — while neglecting the ones that move fastest and cut deepest: hybrid, cross-domain, and narrative-driven threats. Consider the Salt Typhoon campaign in 2024. State-linked actors compromised multiple U.S. telecom networks for nearly a year, breaching routers, core systems, and even National Guard networks. What began as a cyber incident rippled across national security. Or, the hybrid criminal case in which a fake recruiter on LinkedIn lured a corporate employee into downloading malware while coordinating physical intimidation. Digital, physical, and psychological tactics in one operation. ... Asymmetric actors win by exploiting tempo, surprise, and blind spots. As the former U.S. Army Asymmetric Warfare Group explained, its mission was to “identify critical asymmetric threats… through global first-hand observations,” enabling rapid adaptation in a shifting threat environment. That’s the same level of insight security leaders should demand whether from small teams or entire corporations. They don’t respect our categories. They will hit us digitally, physically, and reputationally in whatever sequence maximizes confusion and slows our response. They’ll use low-cost tools to cause high-cost damage: small moves, outsized effects.


Employees keep finding new ways around company access controls

AI, SaaS, and personal devices are changing how people get work done, but the tools that protect company systems have not kept up, according to 1Password. Tools like SSO, MDM, and IAM no longer align with how employees and AI agents access data. The result is what researchers call the “access-trust gap,” a growing distance between what organizations think they can control and how employees and AI systems access company data. The survey tracks four areas where this gap is widening: AI governance, SaaS and shadow IT, credentials, and endpoint security. Each shows the same pattern of rapid adoption and limited oversight. ... Organizations now rely on hundreds of cloud apps, most outside IT’s visibility. Over half of employees admit they have downloaded work tools without permission, often because approved options are slower or lack needed features. This behavior drives SaaS sprawl. 70% of security professionals say SSO tools are not a complete solution for securing identities. On average, only about two-thirds of enterprise apps sit behind SSO, leaving a large portion unmanaged. Offboarding gaps make the problem worse. 38% of employees say they have accessed a former employer’s account or data after leaving the company. ... Mobile Device Management remains the default control for company hardware, but security leaders see its limits. MDM tools do not adequately safeguard managed devices or ensure compliance.


Securing APIs at Scale: Threats, Testing, and Governance

API security must be approached as a fundamental element of the design and development process, rather than an afterthought or add-on. Many organizations fall short in this regard, assuming that security measures can be patched onto an existing system by deploying security devices like Web Application Firewall (WAF) at the perimeter. In reality, secure APIs begin with the first line of code, integrating security controls throughout the design lifecycle. Even minor security gaps can result in significant economic losses, legal repercussions, and long-term brand damage. Designing APIs with inadequate security practices introduces risks that compound over time, often becoming a time bomb for organizations. ... APIs are attractive targets for attackers because they expose business logic, data flows, and authentication mechanisms. According to Salt Security, 94% of organizations experienced an API-related security incident in the past year. The threats facing APIs are constantly evolving, becoming more sophisticated and targeted. ... Given the complexity and scale of API ecosystems, a proactive and comprehensive testing strategy is crucial. Relying solely on manual testing is no longer sufficient; automation is key. ... Technical controls are vital, but without a strong governance framework, API security efforts can quickly unravel. Without governance, APIs become a “wild west” of inconsistent standards, duplicated efforts, and accidental exposure. 
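One concrete form the automated testing described above can take is a CI check that no endpoint ships without an authentication requirement — catching accidental public exposure before production rather than after. The route-table shape and names below are assumptions of this sketch:

```python
# Toy automated API security check: every route must declare an auth
# scheme unless it is explicitly allow-listed as public.
ALLOWED_PUBLIC = {"/health"}

def unauthenticated_routes(routes):
    """Return paths that are unauthenticated and not known-public."""
    return [r["path"] for r in routes
            if r["auth"] is None and r["path"] not in ALLOWED_PUBLIC]

routes = [
    {"path": "/orders",      "auth": "oauth2"},
    {"path": "/orders/{id}", "auth": "oauth2"},
    {"path": "/health",      "auth": None},  # deliberately public
]
print(unauthenticated_routes(routes))  # [] -> nothing slipped through
```

Run against a machine-readable API spec in the build pipeline, a check like this is one small piece of the "shift left" approach the excerpt argues for — security asserted from the first line of code, not patched on at the perimeter.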


The Agentic Evolution: How Autonomous AI is Re-Architecting the Enterprise

The rise of Agentic AI is leading to a new kind of enterprise that functions more like a living system. In this model, AI agents and humans work together as collaborators. The agents handle ongoing operations and optimize outcomes, while humans provide strategy, creativity, and oversight. Organizations that can successfully combine human intelligence with machine autonomy will lead the next era of business transformation. They will move faster, adapt quicker, and make better use of their data and resources. The Agentic Leap is not only about new technology; it represents a deeper change in how enterprises think and operate. It marks the beginning of organizations that are not only supported by AI but are actively driven and shaped by it. The traditional hierarchy of command is gradually evolving into a network of intelligent collaboration, where humans and AI systems continuously exchange information, refine strategies, and act with shared intent. In this model, humans and AI agents function as true partners. Agents operate as intelligent executors and problem-solvers, constantly monitoring data flows, identifying opportunities, and adapting operations in real time. They can handle repetitive, data-intensive tasks, freeing humans to focus on higher-order functions such as strategic planning, creative innovation, and ethical oversight. Humans, in turn, provide contextual understanding, emotional intelligence, and long-term vision: qualities that anchor AI-driven actions in purpose and responsibility.


6 essential rules for unleashing AI on your software development process - and the No. 1 risk

"AI is not something you can pull out of your toolbox and expect magical things to happen," cautioned Andrew Kum-Seun, research director at Info-Tech Research Group. "At least, not right now. IT managers must be prepared to address the human, workflow, and technical implications that naturally come with AI while being honest about what AI can do today for their organization." In other words, get your AI implementation in order before you attempt to apply it to getting your software development in order. ... As Agile is meant to maintain humanity in software development, AI needs to support this vision. This must be a core component of AI-driven Agile development as well. "If leaders are unable to bridge their intent for AI with the team's concerns, they will likely see improper use of AI and, perhaps, deliberate sabotage in its implementation," said Kum-Seun. Another important step is to "keep all AI explainable by ensuring the use of AI tools that clearly cite where their suggestions come from -- no black-box code that cannot be simply verified," said Sopuch. "Human oversight is a required step. AI can write and refactor code, but humans absolutely must approve merges, product pushes, or any exceptions. Everything in the process must be logged, including prompts, outputs, and approvals so that an audit can easily take place on demand."


The AWS outage post-mortem is more revealing in what it doesn’t say

When AWS suffered a series of cascading failures that crashed its systems for hours in late October, the industry was once again reminded of its extreme dependence on major hyperscalers. The incident also shed an uncomfortable light on how fragile these massive environments have become. In Amazon’s detailed post-mortem report, the cloud giant described a vast array of delicate systems that keep global operations functioning — at least, most of the time. ... “The outage exposed how deeply interdependent and fragile our systems have become. It doesn’t provide any confidence that it won’t happen again. ‘Improved safeguards’ and ‘better change management’ sound like procedural fixes, but they’re not proof of architectural resilience. If AWS wants to win back enterprise confidence, it needs to show hard evidence that one regional incident can’t cascade across its global network again. Right now, customers still carry most of that risk themselves.” ... Ellis agreed with others that AWS didn’t detail why this cascading failure happened on that day, which makes it difficult for enterprise IT executives to have high confidence that something similar won’t happen in a month. “They talked about what things failed and not what caused the failure. Typically, failures like this are caused by a change in the environment. Someone wrote a script and it changed something or they hit a threshold. It could have been as simple as a disk failure in one of the nodes. I tend to think it’s a scaling problem.”


Five Real-World Ways AI Can Boost Your Bank’s Operations

Use of artificial intelligence decisioning has already had time to prove itself, and the results have been strong, according to Daryl Jones, senior director. The fit varies from one institution to another, "but the lift, overall, has been unquestionable," said Jones. He said institutions using AI in lending decisions have generally seen healthy increases in approvals, with solid results. One caveat is that as aspects of loan decisions transition to AI, institutions have to be careful how human lenders influence the software development process. ... Technology has long been a mainstay for antifraud, according to John Meyer, managing director. "We’ve had machine learning algorithms since the 1990s," said Meyer, but today’s antifraud applications of AI go a step beyond. He explained that the old technology could evaluate a few data points "on day two," once the damage was already done. By contrast, AI-based techniques can screen and surface instances truly needing human evaluation, according to Meyer. Such applications include verifying that paper checks are genuine. Meyer noted that check fraud remains a significant issue for the banking industry in spite of the rise of digital transactions. ... Even in a modern banking office, documents can be a rat’s nest. "We had a client on the West Coast that wanted to centralize all of its operational documents," said Clio Silman, managing director. 


Context engineering: Improving AI by moving beyond the prompt

It isn’t a new practice for developers of AI models to ingest various sources of information to train their tools to provide the best outputs, notes Neeraj Abhyankar, vice president of data and AI at R Systems, a digital product engineering firm. He defines the recently coined term context engineering as a strategic capability that shapes how AI systems interact with the broader enterprise. ... Context engineering will be critical for autonomous agents trusted to perform complex tasks on an organization’s behalf without errors, he adds. ... Context engineering is an “architectural shift” in how AI systems are built, adds Louis Landry, CTO at data analytics firm Teradata. “Early generative AI was stateless, handling isolated interactions where prompt engineering was sufficient,” he says. “However, autonomous agents are fundamentally different. They persist across multiple interactions, make sequential decisions, and operate with varying levels of human oversight.” He suggests that AI users are moving away from the approach of, “How do I ask this AI a question?” to “How do I build systems that continuously supply agents with the right operational context?” “The shift is toward context-aware agent architectures, especially as we move from simple task-based agents to autonomous agentic systems that make decisions, chain together complex workflows, and operate independently,” Landry adds.
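The shift Landry describes — from "how do I ask this AI a question?" to "how do I continuously supply agents with the right operational context?" — can be pictured in miniature. All names, fields, and the window size below are illustrative assumptions, not a reference to any particular framework:

```python
# Toy context-assembly step: instead of a bare prompt, gather the agent's
# task, its recent interaction history, and relevant reference material
# into one structured payload supplied on every decision step.
def build_context(task, history, documents, max_history=5):
    return {
        "task": task,
        "recent_interactions": history[-max_history:],  # persists across turns
        "reference_documents": documents,
    }

ctx = build_context(
    task="reconcile invoices",
    history=["step 1: fetched ledger", "step 2: matched 40 of 45 invoices"],
    documents=["invoice-policy-v3"],
)
print(len(ctx["recent_interactions"]))  # 2
```

The engineering work is in what feeds this function — retrieval, summarization of long histories, and freshness guarantees — but the architectural point stands: the context payload, not the prompt string, becomes the unit of design.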


India’s Search for Digital Sovereignty

states are seeking to impose varying degrees of control over the internet. Often, these manifest as restrictions on information flows, which have consequences for civil liberties such as speech, expression, dissent, and the exchange of ideas in society. And, in a time when both geopolitical and domestic actors, state and non-state alike, cynically exploit open societies to exacerbate polarization and dehumanization, calls for greater control might seem appealing. However, it is vital that attempts to curb the concentration of power and resources of one set of actors do not merely transfer those same powers to another set. On the contrary, the goal should be to dissipate dominance, in general. ... It is not that alternative pathways to reduce concentration do not exist. Free and open source software, though not without its own challenges, is an approach that many can choose. Kailash Nadh, one of the founders of the FOSS United Foundation, has argued that for India to achieve technological self-determination, it needed to “publicly acknowledge” FOSS, and invest “time, effort and resources into” it. In late August, perhaps in a nod to the Microsoft-Nayara situation, LibreOffice positioned itself as a “Strategic Asset for Governments and Enterprises Focused on Digital Sovereignty and Privacy.” When it comes to information distribution and consumption, decentralized social networks and ideas such as “middleware” have existed for several years, but have yet to gain traction in India’s policy discourse.

Daily Tech Digest - August 04, 2025


Quote for the day:

"You don’t have to be great to start, but you have to start to be great." — Zig Ziglar


Why tomorrow’s best devs won’t just code — they’ll curate, coordinate and command AI

It is not just about writing code anymore — it is about understanding systems, structuring problems and working alongside AI like a team member. That is a tall order. That said, I do believe that there is a way forward. It starts by changing the way we learn. If you are just starting out, avoid relying on AI to get things done. It is tempting, sure, but in the long run, it is also harmful. If you skip the manual practice, you are missing out on building a deeper understanding of how software really works. That understanding is critical if you want to grow into the kind of developer who can lead, architect and guide AI instead of being replaced by it. ... AI-augmented developers will replace large teams that used to be necessary to move a project forward. In terms of efficiency, there is a lot to celebrate about this change — reduced communication time, faster results and higher bars for what one person can realistically accomplish. But, of course, this does not mean teams will disappear altogether. It is just that the structure will change. ... Being technically fluent will still remain a crucial requirement — but it won’t be enough to simply know how to code. You will need to understand product thinking, user needs and how to manage AI’s output. It will be more about system design and strategic vision. For some, this may sound intimidating, but for others, it will also open many doors. People with creativity and a knack for problem-solving will have huge opportunities ahead of them.


The Wild West of Shadow IT

From copy generators to deck generators, code assistants, and data crunchers, most of them were never reviewed or approved. The productivity gains of AI are huge. Productivity has been catapulted forward in every department and across every vertical. So what could go wrong? Oh, just sensitive data leaks, uncontrolled API connections, persistent OAuth tokens, and no monitoring, audit logs, or privacy policies… and that's just to name a few of the very real and dangerous issues. ... Modern SaaS stacks form an interconnected ecosystem. Applications integrate with each other through OAuth tokens, API keys, and third-party plug-ins to automate workflows and enable productivity. But every integration is a potential entry point — and attackers know it. Compromising a lesser-known SaaS tool with broad integration permissions can serve as a stepping stone into more critical systems. Shadow integrations, unvetted AI tools, and abandoned apps connected via OAuth can create a fragmented, risky supply chain.  ... Let's be honest: compliance has become a jungle due to IT democratization. From GDPR to SOC 2… your organization's compliance is hard to gauge when your employees use hundreds of SaaS tools and your data is scattered across more AI apps than you even know about. You have two compliance challenges on the table: you need to make sure the apps in your stack are compliant, and you also need to ensure that your environment is under control should an audit take place.
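The OAuth risks described above can be made concrete with a small audit script. This is a minimal sketch, not a real identity-provider integration: the app names, scope strings, and thresholds below are all hypothetical stand-ins for what you would export from your own IdP.

```python
from datetime import date, timedelta

# Hypothetical OAuth grants, e.g. exported from an identity provider; the app
# names, scope strings, and thresholds are invented for this sketch.
GRANTS = [
    {"app": "DeckGenie", "scopes": ["files.read.all", "mail.send"],
     "last_used": date(2025, 3, 1)},
    {"app": "CodeHelper", "scopes": ["repo.read"],
     "last_used": date(2025, 12, 20)},
]

HIGH_RISK_SCOPES = {"files.read.all", "mail.send", "admin.directory"}
STALE_AFTER = timedelta(days=90)

def audit(grants, today):
    """Flag grants with broad scopes or long-abandoned access."""
    findings = []
    for g in grants:
        risky = HIGH_RISK_SCOPES.intersection(g["scopes"])
        if risky:
            findings.append((g["app"], f"broad scopes: {sorted(risky)}"))
        if today - g["last_used"] > STALE_AFTER:
            findings.append((g["app"], "abandoned grant: unused for 90+ days"))
    return findings

for app, issue in audit(GRANTS, date(2025, 12, 24)):
    print(f"{app}: {issue}")
```

Even a simple pass like this surfaces the two failure modes the excerpt calls out: over-permissioned integrations and abandoned apps that still hold tokens.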


Edge Computing: Not Just for Tech Giants Anymore

A resilient local edge infrastructure significantly enhances the availability and reliability of enterprise digital shopfloor operations by providing powerful on-premises processing as close to the data source as possible—ensuring uninterrupted operations while avoiding external cloud dependency. For businesses, this translates to improved production floor performance and increased uptime—both critical in sectors such as manufacturing, healthcare, and energy. In today’s hyperconnected market, where customers expect seamless digital interactions around the clock, any delay or downtime can lead to lost revenue and reputational damage. Moreover, as AI, IoT, and real-time analytics continue to grow, on-premises OT edge infrastructure combined with industrial-grade connectivity such as private 4.9/LTE or 5G provides the necessary low-latency platform to support these emerging technologies. Investing in resilient infrastructure is no longer optional; it’s a strategic imperative for organisations seeking to maintain operational continuity, foster innovation, and stay ahead of competitors in an increasingly digital and dynamic global economy. ... Once, infrastructure decisions were dominated by IT and boiled down to a simple choice between public and private infrastructure. Today, with IT/OT convergence, it’s all about fit-for-purpose architecture. On-premises edge computing doesn’t replace the cloud — it complements it in powerful ways.


A Reporting Breakthrough: Advanced Reporting Architecture

Advanced Reporting Architecture is based on a powerful and scalable SaaS architecture, which efficiently addresses user-specific reporting requirements by generating all possible reports upfront. Users simply select and analyze the views that matter most to them. The Advanced Reporting Architecture’s SaaS platform is built for global reach and enterprise reliability, with the following features:

Modern User Interface: Delivered via AWS, optimized for mobile and desktop, with seamless language switching (English, French, German, Spanish, and more to come).
Encrypted Cloud Storage: Ensuring uploaded files and reports are always secure.
Serverless Data Processing: High-precision processing that analyzes user-uploaded data and applies relevant, data-driven factors to maximize analytical efficiency and lower processing costs.
Comprehensive Asset Management: Support for editable reports, dashboards, presentations, pivots, and custom outputs.
Integrated Payments & Accounting: Powered by PayPal and Odoo.
Simple Subscription Model: Pay only for what you use—no expensive licenses, hardware, or ongoing maintenance.

Some leading-edge reporting platforms, such as PrestoCharts, are based on Advanced Reporting Architecture and have been successful in enabling business users to develop custom reports on the fly. Thus, Advanced Reporting Architecture puts reporting prowess in the hands of the user.


These jobs face the highest risk of AI takeover, according to Microsoft

According to the report -- which has yet to be peer-reviewed -- the most at-risk jobs are those that are based on the gathering, synthesis, and communication of information, at which modern generative AI systems excel: think translators, sales and customer service reps, writers and journalists, and political scientists. The most secure jobs, on the other hand, are supposedly those that depend more on physical labor and interpersonal skills. No AI is going to replace phlebotomists, embalmers, or massage therapists anytime soon. ... "It is tempting to conclude that occupations that have high overlap with activities AI performs will be automated and thus experience job or wage loss, and that occupations with activities AI assists with will be augmented and raise wages," the Microsoft researchers note in their report. "This would be a mistake, as our data do not include the downstream business impacts of new technology, which are very hard to predict and often counterintuitive." The report also echoes what's become something of a mantra among the biggest tech companies as they ramp up their AI efforts: that even though AI will replace or radically transform many jobs, it will also create new ones. ... It's possible that AI could play a role in helping people practice that skill. About one in three Americans are already using the technology to help them navigate a shift in their career, a recent study found.


AIBOMs are the new SBOMs: The missing link in AI risk management

AIBOMs follow the same formats as traditional SBOMs, but contain AI-specific content and metadata, like model family, acceptable usage, AI-specific licenses, etc. If you are a security leader at a large defense contractor, you’d need the ability to identify model developers and their country of origin. This would ensure you are not utilizing models originating from near-peer adversary countries, such as China. ... The first step is inventorying their AI. Utilize AIBOMs to inventory your AI dependencies, monitor what is approved vs. requested vs. denied, and ensure you have an understanding of what is deployed where. The second is to actively seek out AI, rather than waiting for employees to discover it. Organizations need capabilities to identify AI in code and automatically generate resulting AIBOMs. This should be integrated as part of the MLOps pipeline to generate AIBOMs and automatically surface new AI usage as it occurs. The third is to develop and adopt responsible AI policies. Some of them are fairly common-sense: no contributors from OFAC countries, no copylefted licenses, no usage of models without a three-month track record on HuggingFace, and no usage of models over a year old without updates. Then, enforce those policies in an automated and scalable system. The key is moving from reactive discovery to proactive monitoring.
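The policies above (no copyleft licenses, a three-month track record on HuggingFace, no models over a year old without updates) lend themselves to automated enforcement against AIBOM entries. Below is a hedged sketch; the entries and field names are illustrative only — real AIBOMs would use formats such as SPDX or CycloneDX with AI-specific metadata.

```python
from datetime import date

# Hypothetical AIBOM entries; real AIBOMs carry richer metadata
# (model family, acceptable usage, provenance, country of origin).
AIBOM = [
    {"model": "acme/summarizer-7b", "license": "apache-2.0",
     "first_published": date(2025, 1, 10), "last_updated": date(2025, 11, 2)},
    {"model": "legacy/ner-small", "license": "gpl-3.0",
     "first_published": date(2023, 5, 1), "last_updated": date(2023, 6, 1)},
]

DENIED_LICENSES = {"gpl-3.0", "agpl-3.0"}  # example policy: no copyleft

def evaluate(entry, today):
    """Return the policy violations for one AIBOM entry."""
    violations = []
    if entry["license"] in DENIED_LICENSES:
        violations.append("copyleft license")
    if (today - entry["first_published"]).days < 90:
        violations.append("under 3-month track record")
    if (today - entry["last_updated"]).days > 365:
        violations.append("no update in over a year")
    return violations

for e in AIBOM:
    print(e["model"], evaluate(e, date(2025, 12, 24)))
```

Wired into an MLOps pipeline, a check like this turns the policies from a document into a gate: new AI usage is surfaced and evaluated as it appears rather than discovered after the fact.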


2026 Budgets: What’s on Top of CIOs’ Lists (and What Should Be)

CIO shops are becoming outcome-based, which makes them accountable for what they’re delivering against the value potential, not how many hours were burned. “The biggest challenge seems to be changing every day, but I think it’s going to be all about balancing long-term vision with near-term execution,” says Sudeep George, CTO at software-delivered AI data company iMerit. “Frankly, nobody has a very good idea of what's going to happen in 2026, so everyone's placing bets,” he continues. “This unpredictability is going to be the nature of the beast, and we have to be ready for that.” ... “Reducing the amount of tech debt will always continue to be a focus for my organization,” says Calleja-Matsko. “We’re constantly looking at re-evaluating contracts, terms, [and] whether we have overlapping business capabilities that are being addressed by multiple tools that we have.” It's rationalizing, she adds, and what that does is free up investment. How is this vendor pricing its offering? How do we make sure we include enough in our budget based on that pricing model? “That’s my challenge,” Calleja-Matsko emphasizes. Talent is top of mind for 2026, both in terms of attracting it and retaining it. Ultimately though, AI investments are enabling the company to spend more time with customers.


Digital Twin: Revolutionizing the Future of Technology and Industry

The rise of the Internet of Things (IoT) has made digital twin technology more relevant and accessible. IoT devices continuously gather data from their surroundings and send it to the cloud. This data is used to create and update digital twins of those devices or systems. In smart homes, digital twins help monitor and control lighting, heating, and appliances. In industrial settings, IoT sensors track machine health and performance. Moreover, these smart systems can detect minor issues early, before they lead to failures. As more devices come online, digital twins offer greater visibility and control. ... Despite its benefits, digital twin technology comes with challenges. One major issue is the high cost of implementation. Setting up sensors, software systems, and data processing can be expensive, particularly for small businesses. There are also concerns about data security and privacy. Since digital twins rely on continuous data flows, any breach can be risky. Integrating digital twins into existing systems can be complex. Moreover, it requires skilled professionals who understand both the physical systems and the underlying digital technologies. Another challenge is ensuring the quality and accuracy of the data. If the input data is flawed, the digital twin’s results will also be unreliable. Companies must also cope with large amounts of data, which requires a robust IT infrastructure.
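To make the idea concrete, here is a minimal, illustrative sketch of a digital twin that mirrors sensor readings from a physical pump and surfaces early warnings before an outright failure. The class name, thresholds, and field names are invented for the example; a real twin would sync from an IoT message broker and model far richer state.

```python
class PumpTwin:
    """Minimal digital-twin sketch: mirrors the last known state of a
    physical pump and flags early warning signs before a failure."""

    def __init__(self, vibration_limit=7.0, temp_limit=80.0):
        self.state = {"vibration_mm_s": 0.0, "temp_c": 20.0}
        self.vibration_limit = vibration_limit
        self.temp_limit = temp_limit

    def update(self, reading):
        """Apply a new sensor reading and return any warnings."""
        self.state.update(reading)
        return self.warnings()

    def warnings(self):
        w = []
        if self.state["vibration_mm_s"] > self.vibration_limit:
            w.append("vibration above limit: bearing wear likely")
        if self.state["temp_c"] > self.temp_limit:
            w.append("temperature above limit: check cooling")
        return w

twin = PumpTwin()
print(twin.update({"vibration_mm_s": 8.2}))
```

The value is in the early signal: the twin flags bearing wear from a vibration trend long before the physical pump actually fails.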


Why Banks Must Stop Pretending They’re Not Tech Companies

The most successful "banks" of the future may not even call themselves banks at all. While traditional institutions cling to century-old identities rooted in vaults and branches, their most formidable competitors are building financial ecosystems from the ground up with APIs, cloud infrastructure, and data-driven decision engines. ... The question isn’t whether banks will become technology companies. It’s whether they’ll make that transition fast enough to remain relevant. And to do this, they must rethink their identity by operating as technology platforms that enable fast, connected, and customer-first experiences. ... This isn’t about layering digital tools on top of legacy infrastructure or launching a chatbot and calling it innovation. It’s about adopting a platform mindset — one that treats technology not as a cost center but as the foundation of growth. A true platform bank is modular, API-first, and cloud-native. It uses real-time data to personalize every interaction. It delivers experiences that are intuitive, fast, and seamless — meeting customers wherever they are and embedding financial services into their everyday lives. ... To keep up with the pace of innovation, banks must adopt skills-based models that prioritize adaptability and continuous learning. Upskilling isn’t optional. It’s how institutions stay responsive to market shifts and build lasting capabilities. And it starts at the top.


Colo space crunch could cripple IT expansion projects

For enterprise IT execs who already have a lot on their plates, the lack of available colocation space represents yet another headache to deal with, and one with major implications. Nobody wants to have to explain to the CIO or the board of directors that the company can’t proceed with digitization efforts or AI projects because there’s no space to put the servers. IT execs need to start the planning process now to get ahead of the problem. ... Demand has outstripped supply due to multiple factors, according to Pat Lynch, executive managing director at CBRE Data Center Solutions. “AI is definitely part of the demand scenario that we see in the market, but we also see growing demand from enterprise clients for raw compute power that companies are using in all aspects of their business.” ... It’s not GPU chip shortages that are slowing down new construction of data centers; it’s power. When a hyperscaler, colo operator or enterprise starts looking for a location to build a data center, the first thing they need is a commitment from the utility company for the required megawattage. According to a McKinsey study, data centers are consuming more power due to the proliferation of the power-hungry GPUs required for AI. Ten years ago, a 30 MW data center was considered large. Today, a 200 MW facility is considered normal.

Daily Tech Digest - September 24, 2024

Effective Strategies for Talking About Security Risks with Business Leaders

Like every difficult conversation, CISOs must pick the right time, place and strategy to discuss cyber risks with the executive team and staff. Instead of waiting for the opportunity to arise, CISOs should proactively engage with individuals at all levels of the organization to influence them and ensure an understanding of security policies and incident response. These conversations could come in the form of monthly or quarterly meetings with senior stakeholders to maintain the cadence and consistency of the conversations, discuss how the threat landscape is evolving and review their part of the business through a cybersecurity lens. They could also be casual watercooler chats with staff members, which not only help to educate and inform employees but also build vital internal relationships that can affect online behaviors. In addition to talking, CISOs must also listen to and learn about key stakeholders to tailor conversations around their interests and concerns. ... If you're talking to the board, you'll need to know the people around that table. What are their interests, and how can you communicate in a way that resonates with them and gets their attention? Use visualization techniques and find a "cyber ally" on the board who will back you and help reinforce your ideas and the information you share.


Is Explainable AI Explainable Enough Yet?

“More often than not, the higher the accuracy provided by an AI model, the more complex and less explainable it becomes, which makes developing explainable AI models challenging,” says Godbole. “The premise of these AI systems is that they can work with high-dimensional data and build non-linear relationships that are beyond human capabilities. This allows them to identify patterns at a large scale and provide higher accuracy. However, it becomes difficult to explain this non-linearity and provide simple, intuitive explanations in understandable terms.” Other challenges are providing explanations that are both comprehensive and easily understandable and the fact that businesses hesitate to explain their systems fully for fear of divulging intellectual property (IP) and losing their competitive advantage. “As we make progress towards more sophisticated AI systems, we may face greater challenges in explaining their decision-making processes. For autonomous systems, providing real-time explainability for critical decisions could be technically difficult, even though it will be highly necessary,” says Godbole. When AI is used in sensitive areas, it will become increasingly important to explain decisions that have significant ethical implications, but this will also be challenging.


The challenge of cloud computing forensics

Data replication across multiple locations complicates forensics processes that require the ability to pinpoint sources for analysis. Consider the challenge of retrieving deleted data from cloud systems—not just a technical obstacle, but a matter of accountability that is often not addressed by IT until it’s too late. Multitenancy involves shared resources among multiple users, making it difficult to distinguish and segregate data. This is a systemic problem for cloud security, and it is particularly problematic for cloud platform forensics. The NIST document acknowledges this challenge and recommends the implementation of access mechanisms and frameworks so companies can maintain data integrity and manage incident response. Thus, the mechanisms are in place to deal with issues once they occur because accounting happens on an ongoing basis. The lack of location transparency is a nightmare. Data resides in various physical jurisdictions, all with different laws and cultural considerations. Crimes may occur on a public cloud point of presence in a country that disallows warrants to examine the physical systems, whereas other countries have more options for law enforcement. Guess which countries the criminals choose to leverage.


Is the rise of genAI about to create an energy crisis?

Though data center power consumption is expected to double by 2028, according to IDC research director Sean Graham, AI still accounts for a small share of it — just 18% of data center energy use. “So, it’s not fair to blame energy consumption on AI,” he said. “Now, I don’t mean to say AI isn’t using a lot of energy and data centers aren’t growing at a very fast rate. Data Center energy consumption is growing at 20% per year. That’s significant, but it’s still only 2.5% of the global energy demand.” “It’s not like we can blame energy problems exclusively on AI,” Graham said. ... Beyond the pressure from genAI growth, electricity prices are rising due to supply and demand dynamics, environmental regulations, geopolitical events, and extreme weather events fueled in part by climate change, according to an IDC study published today. IDC believes the higher electricity prices of the last five years are likely to continue, making data centers considerably more expensive to operate. Amid that backdrop, electricity suppliers and other utilities have argued that AI creators and hosts should be required to pay higher prices for electricity — as cloud providers did before them — because they’re quickly consuming greater amounts of compute cycles and, therefore, energy compared to other users.


20 Years in Open Source: Resilience, Failure, Success

The rise of Big Tech has emphasized one of the most significant truths I’ve learned: the need for digital sovereignty. Over time, I’ve observed how centralized platforms can slowly erode consumers’ authority over their data and software. Today, more than ever, I believe that open source is a crucial path to regaining control — whether you’re an individual, a business, or a government. With open source software, you own your infrastructure, and you’re not subject to the whims of a vendor deciding to change prices, terms, or even direction. I’ve learned that part of being resilient in this industry means providing alternatives to centralized solutions. We built CryptPad — to offer an encrypted, privacy-respecting alternative to tools like Google Docs. It hasn’t been easy, but it’s a project I believe in because it aligns with my core belief: people should control their data. I would improve the way the community communicates the benefits of open source. The conversation all too frequently concentrates on “free vs. paid” software. In reality, what matters is the distinction between dependence and freedom. I’ve concluded that we need to explain better how individuals may take charge of their data, privacy, and future by utilizing open source.


20 Tech Pros On Top Trends In Software Testing

The shift toward AI-driven testing will revolutionize software quality assurance. AI can intelligently predict potential failures, adapt to changes and optimize testing processes, ensuring that products are not only reliable, but also innovative. This approach allows us to focus on creating user experiences that are intuitive and delightful. ... AI-driven test automation has been the trend that almost every client of ours has been asking for in the past year. Combining advanced self-healing test scripts and visual testing methodologies has proven to improve software quality. This process also reduces the time to market by helping break down complex tasks. ... With many new applications relying heavily on third-party APIs or software libraries, rigorous security auditing and testing of these services is crucial to avoid supply chain attacks against critical services. ... One trend that will become increasingly important is shift-left security testing. As software development accelerates, security risks are growing. Integrating security testing into the early stages of development—shifting left—enables teams to identify vulnerabilities earlier, reduce remediation costs and ensure secure coding practices, ultimately leading to more secure software releases.


How to manage shadow IT and reduce your attack surface

To effectively mitigate the risks associated with shadow IT, your organization should adopt a comprehensive approach that encompasses the following strategies:

Understanding the root causes: Engage with different business units to identify the pain points that drive employees to seek unauthorized solutions. Streamline your IT processes to reduce friction and make it easier for employees to accomplish their tasks within approved channels, minimizing the temptation to bypass security measures.

Educating employees: Raise awareness across your organization about the risks associated with shadow IT and provide approved alternatives. Foster a culture of collaboration and open communication between IT and business teams, encouraging employees to seek guidance and support when selecting technology solutions.

Establishing clear policies: Define and communicate guidelines for the appropriate use of personal devices, software, and services. Enforce consequences for policy violations to ensure compliance and accountability.

Leveraging technology: Implement tools that enable your IT team to continuously discover and monitor all unknown and unmanaged IT assets.
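At its core, the continuous-discovery step boils down to diffing what you observe on the network against what IT has sanctioned. A toy sketch, with hypothetical domain data standing in for real egress or DNS logs:

```python
# Hypothetical inputs: app domains observed in egress/DNS logs vs. a
# sanctioned-app registry maintained by IT.
discovered = {"slack.com", "notion.so", "pastebin.com", "chatgpt.com"}
sanctioned = {"slack.com", "notion.so"}

unmanaged = sorted(discovered - sanctioned)
print(unmanaged)  # assets that need review, not automatic blocking
```

Treating the output as a review queue rather than a blocklist fits the collaborative posture the strategies above recommend: discover first, then engage the business unit that adopted the tool.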


How software teams should prepare for the digital twin and AI revolution

By integrating AI to enhance real-time analytics, users can develop a more nuanced understanding of emerging issues, improving situational awareness and allowing them to make better decisions. Using in-memory computing technology, digital twins produce real-time analytics results that users aggregate and query to continuously visualize the dynamics of a complex system and look for emerging issues that need attention. In the near future, generative AI-driven tools will magnify these capabilities by automatically generating queries, detecting anomalies, and then alerting users as needed. AI will create sophisticated data visualizations on dashboards that point to emerging issues, giving managers even better situational awareness and responsiveness. ... Digital twins can use ML techniques to monitor thousands of entry points and internal servers to detect unusual logins, access attempts, and processes. However, detecting patterns that integrate this information and create an overall threat assessment may require data aggregation and query to tie together the elements of a kill chain. Generative AI can assist personnel by using these tools to detect unusual behaviors and alert personnel who can carry the investigation forward.
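The anomaly-detection idea above can be illustrated with a simple statistical baseline. This sketch flags an unusual spike in login volume using a z-score over hypothetical hourly counts; a production digital twin would use richer ML models and correlate across many entry points, but the shape of the check is the same.

```python
from statistics import mean, stdev

# Hypothetical hourly login counts for one server, kept by the twin as history.
history = [12, 9, 11, 10, 13, 8, 11, 10]
current = 42  # this hour's count

mu, sigma = mean(history), stdev(history)
z = (current - mu) / sigma

if z > 3:  # a simple 3-sigma rule; real systems would tune this threshold
    print(f"alert: login volume z-score {z:.1f}, investigate possible kill chain")
```

An alert like this is the trigger; as the excerpt notes, tying it into an overall threat assessment still requires aggregating and querying across the other elements of a kill chain.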


The Open Source Software Balancing Act: How to Maximize the Benefits And Minimize the Risks

OSS has democratized access to cutting-edge technologies, fostered a culture of collaboration and empowered businesses to prioritize innovation. By tapping into the vast pool of open source components available, software developers can accelerate product development, minimize time-to-market and drive innovation at scale. ... Paying down technical debt requires two things: consistency and prioritization. First, organizations should opt for fewer high-quality suppliers with well-maintained open source projects because they have greater reliability and stability, reducing the likelihood of introducing bugs or issues into their own codebase that rack up tech debt. In terms of transparency, organizations must have complete visibility into their software infrastructure. This is another area where SBOMs are key. With an SBOM, developers have full visibility into every element of their software, which reduces the risk of using outdated or vulnerable components that contribute to technical debt. There’s no question that open source software offers unparalleled opportunities for innovation, collaboration and growth within the software development ecosystem. 


Is AI really going to burn the planet?

Trying to understand exactly how energy-intensive the training of datasets is, is even more complex than understanding exactly how big data center GHG sins are. A common “AI is environmentally bad” statistic is that training a large language model like GPT-3 is estimated to use just under 1,300 megawatt-hours (MWh) of electricity, about as much power as consumed annually by 130 US homes, or the equivalent of watching 1.63 million hours of Netflix. The source for this stat is AI company Hugging Face, which does seem to have used some real science to arrive at these numbers. It also, to quote a May Hugging Face probe into all this, seems to have proven that "multi-purpose, generative architectures are orders of magnitude more [energy] expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters.” It’s important to note that what’s being compared here are task-specific AI runs (optimized, smaller models trained in specific generative AI tasks) and multi-purpose (a machine learning model that should be able to process information from different modalities, including images, videos, and text).
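The comparisons quoted above are easy to sanity-check with a little arithmetic: 1,300 MWh across 130 homes implies about 10 MWh per home per year (close to the US household average), and dividing by the Netflix figure gives the implied energy per hour streamed.

```python
# Sanity-checking the quoted comparisons (all figures from the article).
training_mwh = 1_300          # estimated energy to train GPT-3
homes = 130                   # US homes' annual consumption equivalent
netflix_hours = 1_630_000     # hours of Netflix equivalent

mwh_per_home_year = training_mwh / homes
kwh_per_streamed_hour = training_mwh * 1_000 / netflix_hours

print(mwh_per_home_year)                 # 10.0 MWh per home per year
print(round(kwh_per_streamed_hour, 2))   # roughly 0.8 kWh per hour streamed
```

Both implied figures are plausible, which suggests the three comparisons were derived consistently from the same underlying estimate.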



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - July 02, 2024

The Changing Role of the Chief Data Officer

The chief data officer originally played more “defense” than “offense.” The position focused on data security, fraud protection, and Data Governance, and tended to attract people from a technical or legal background. CDOs now may take on a more offensive strategy, proactively finding ways to extract value from the data for the benefit of the wider business, and may come from an analytics or business background. Of course, in reality, the choice between offense and defense is a false one, as companies must do both. ... Major trends for CDOs in the future will include incorporating cutting-edge technology, such as generative AI, large language models, machine learning, and increasingly sophisticated forms of automation. The role is also spreading to a wider variety of industry sectors, such as healthcare, the private sector, and higher education. One of the major challenges is already in progress: responding to the COVID-19 pandemic. The pandemic hugely shook global supply chains, created new business markets, and also radically changed the nature of business itself. 


Duplicate Tech: A Bottom-Line Issue Worth Resolving

The patchwork nature of combined technologies can hinder processes and cause data fragmentation or loss. Moreover, differing cybersecurity capabilities among technologies can expose the organization to increased risk of cyberattacks, as older or less secure systems may be more vulnerable to breaches. Retaining multiple technologies may initially seem prudent in a merger or acquisition, but ultimately it proves detrimental. The drawbacks — from duplicated data and disconnected processes to inefficiencies and security vulnerabilities — far outweigh any perceived benefits, highlighting the critical need for streamlined, unified IT systems. ... There are compelling reasons to remove the dead weight of duplicate technologies and adopt a singular technology. The first step in eliminating tech redundancy is to evaluate existing technologies to determine which tools best align with current and future business needs. A collaborative approach with all relevant stakeholders is recommended to ensure the chosen solution supports organizational goals and avoids unnecessary repetition.


Disability community has long wrestled with 'helpful' technologies—lessons for everyone in dealing with AI

This disability community perspective can be invaluable in approaching new technologies that can assist both disabled and nondisabled people. You can't substitute pretending to be disabled for the experience of actually being disabled, but accessibility can benefit everyone. This is sometimes called the curb-cut effect after the ways that putting a ramp in a curb to help a wheelchair user access the sidewalk also benefits people with strollers, rolling suitcases and bicycles. ... Disability advocates have long battled this type of well-meaning but intrusive assistance—for example, by putting spikes on wheelchair handles to keep people from pushing a person in a wheelchair without being asked to or advocating for services that keep the disabled person in control. The disabled community instead offers a model of assistance as a collaborative effort. Applying this to AI can help to ensure that new AI tools support human autonomy rather than taking over. A key goal of my lab's work is to develop AI-powered assistive robotics that treat the user as an equal partner. We have shown that this model is not just valuable, but inevitable. 


What is the Role of Explainable AI (XAI) In Security?

XAI in cybersecurity is like a colleague who never stops working. While AI helps automatically detect and respond to rapidly evolving threats, XAI helps security professionals understand how these decisions are being made. “Explainable AI sheds light on the inner workings of AI models, making them transparent and trustworthy. Revealing the why behind the models’ predictions, XAI empowers the analysts to make informed decisions. It also enables fast adaptation by exposing insights that lead to quick fine-tuning or new strategies in the face of advanced threats. And most importantly, XAI facilitates collaboration between humans and AI, creating a context in which human intuition complements computational power,” Kolcsár added. ... With XAI working behind the scenes, security teams can quickly discover the root cause of a security alert and initiate a more targeted response, minimizing the overall damage caused by an attack and limiting resource wastage. As transparency allows security professionals to understand how AI models adapt to rapidly evolving threats, they can also ensure that security measures are consistently effective. 
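One common model-agnostic technique behind "the why behind the models' predictions" is measuring how much each input feature actually drives a model's decisions. The sketch below uses permutation importance (shuffle one feature and measure the drop in accuracy) on a toy alert-triage "model"; the feature names and data are invented for illustration, and real deployments would use libraries and much larger datasets.

```python
import random

# Toy "model": flags an alert as malicious when failed logins are high AND
# the source is a new device. Feature names are illustrative only.
def model(failed_logins, new_device, packet_size):
    return int(failed_logins > 5 and new_device == 1)

data = [
    (8, 1, 540), (2, 0, 1200), (9, 1, 300), (1, 1, 800),
    (7, 0, 650), (6, 1, 410), (3, 0, 900), (10, 1, 720),
]
labels = [model(*row) for row in data]  # the model is its own ground truth here

def accuracy(rows):
    return sum(model(*r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(col, seed=0):
    """Shuffle one column and report the resulting drop in accuracy."""
    rng = random.Random(seed)
    shuffled = [list(r) for r in data]
    vals = [r[col] for r in shuffled]
    rng.shuffle(vals)
    for r, v in zip(shuffled, vals):
        r[col] = v
    return accuracy(data) - accuracy(shuffled)

for name, col in [("failed_logins", 0), ("new_device", 1), ("packet_size", 2)]:
    print(f"{name}: importance {permutation_importance(col):.2f}")
```

Because the toy model ignores packet size entirely, its importance comes out as zero, which is exactly the kind of transparency that lets an analyst see which signals an alert actually hinged on.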


10 ways AI can make IT more productive

By infusing AI into business processes, enterprises can achieve levels of productivity, efficiency, consistency, and scale that were unimaginable a decade ago, says Jim Liddle, CIO at hybrid cloud storage provider Nasuni. He observes that mundane repetitive tasks, such as data entry and collection, can be easily handled 24/7 by intelligent AI algorithms. “Complex business decisions, such as fraud detection and price optimization, can now be made in real-time based on huge amounts of data,” Liddle states. “Workflows that spanned days or weeks can now be completed in hours or minutes.”  “Enterprises have long sought to drive efficiency and scale through automation, first with simple programmatic rules-based systems and later with more advanced algorithmic software,” Liddle says.  ... “By reducing boilerplating, teams can save time on repetitive tasks while automated and enhanced documentation keeps pace with code changes and project developments.” He notes that AI can also automatically create pull requests and integrate with project management software. Additionally, AI can generate suggestions to resolve bugs, propose new features, and improve code reviews.


How Tomorrow's Smart Cities Will Think For Themselves

When creating a cognitive city, the fundamental need is to move the computing power to where data is generated: where people live, work and travel. That applies whether you’re building a totally new smart city or retrofitting technology to a pre-existing ‘brownfield’ city. Either way, edge is key here. You’re dealing with information from sensors in rubbish bins, drains, and cameras in traffic lights. ... But in years to come the city itself will respond dynamically to the changing physical world, adjusting energy use in real-time to respond to the weather, for example. The evolution of monitoring has come from a machine-to-machine foundation, with the introduction of the Internet of Things (IoT) and now artificial intelligence (AI) becoming transformational in enabling smart technologies to become dynamic. Emerging AI technologies such as large language models will also play a role going forward, making it easy for both city planners and ordinary citizens to interact with the city they live in. Edge will be the key ingredient that gives us effective control of these cities of the future.


Serverless cloud technology fades away

The meaning of serverless computing became diluted over time. Originally coined to describe a model where developers could run code without provisioning or managing servers, the term has since been applied to a wide range of services that do not fit its original definition, leading to a confusing loss of precision. It’s crucial to focus on the functional characteristics of serverless computing. The elements of serverless—agility, cost-efficiency, and the ability to rapidly deploy and scale applications—remain valuable. It’s important to concentrate on how these characteristics contribute to achieving business goals rather than becoming fixated on the specific technologies in use. Serverless technology will continue to fade into the background due to the rise of other cloud computing paradigms, such as edge computing and microclouds. ... The explosion of generative AI also contributed to the shifting landscape. Cloud providers are deeply invested in enabling AI-driven solutions, which often require specialized compute resources and significant data management capabilities, areas where traditional serverless models may not always excel.


Infrastructure-as-code and its game-changing impact on rapid solutions development

Automation is one of the main benefits of adopting an IaC approach. By automating infrastructure provisioning, IaC allows configuration to be accomplished at a faster pace. Automation also reduces the risk of errors that can result from manual configuration, improving consistency by standardizing the development and deployment of the infrastructure. ... Developers can rapidly assemble and deploy infrastructure blocks, reusing them as needed throughout the development process. When adjustments are needed, developers can simply update the code the blocks are built on rather than making manual one-off changes to infrastructure components. Testing and tracking are more streamlined with IaC since the IaC code serves as a centralized and readily accessible source of documentation on the infrastructure. It also streamlines the testing process, allowing for automated checks of compliance, validation, and other requirements before deployment. Additionally, IaC empowers developers to take advantage of the benefits provided by cloud computing. It facilitates direct interaction with the cloud’s exposed API, allowing developers to dynamically provision, manage, and orchestrate resources.
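The core IaC mechanic described above—declare the desired state as data, then let a reconciler work out what to create, change, or destroy—can be sketched in a few lines. This is an illustrative toy, not any real IaC tool; the resource names and specs are hypothetical, and real tools like Terraform or Pulumi add state tracking, dependency ordering, and provider APIs on top of the same idea.

```python
# Desired state, as it would be declared in version-controlled config.
desired = {
    "web-server": {"type": "vm", "size": "small", "count": 2},
    "app-db":     {"type": "database", "engine": "postgres"},
}

# Current state, as reported by the cloud provider.
current = {
    "web-server": {"type": "vm", "size": "small", "count": 1},
    "old-cache":  {"type": "cache", "engine": "redis"},
}

def plan(desired: dict, current: dict) -> list[tuple[str, str]]:
    """Diff desired vs. current state into an execution plan."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name))
    return actions

for action, resource in plan(desired, current):
    print(f"{action}: {resource}")
```

Because the plan is derived from the declared state rather than hand-typed commands, every environment built from the same declaration comes out identical, which is exactly the consistency benefit the excerpt describes.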


What is Multimodal AI? Here’s Everything You Need to Know

Multimodal AI describes artificial intelligence systems that can simultaneously process and interpret data from various sources such as text, images, audio, and video. Unlike traditional AI models that depend on a single type of data, multimodal AI provides a holistic approach to data processing. ... Although multimodal AI and generative AI share similarities, they differ fundamentally. For instance, generative AI focuses on creating new content from a single type of prompt, such as creating images from textual descriptions. In contrast, multimodal AI processes and understands different sensory inputs, allowing users to input various data types and receive multimodal outputs. ... Multimodal AI represents a significant advancement in the field of artificial intelligence. By understanding and leveraging this advanced technology, data scientists and AI professionals can pave the way for more sophisticated, context-aware, and human-like AI systems, ultimately enriching our interaction with technology and the world around us. 
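The "process several data types at once" idea usually comes down to modality-specific encoders whose outputs are fused into one joint representation. The sketch below shows the simplest fusion strategy, concatenation; the two "encoders" are trivial hand-written stand-ins for the learned neural encoders a real multimodal model would use.

```python
def encode_text(text: str) -> list[float]:
    """Stand-in text encoder: crude length and vowel-ratio features."""
    vowels = sum(ch in "aeiou" for ch in text.lower())
    return [len(text) / 100.0, vowels / max(len(text), 1)]

def encode_image(pixels: list[list[int]]) -> list[float]:
    """Stand-in image encoder: mean brightness and pixel-count features."""
    flat = [p for row in pixels for p in row]
    return [sum(flat) / (255.0 * len(flat)), len(flat) / 1000.0]

def fuse(text: str, pixels: list[list[int]]) -> list[float]:
    """Early fusion by concatenation: one vector spanning both modalities."""
    return encode_text(text) + encode_image(pixels)

# A caption plus a tiny 2x2 grayscale "image" yield a single joint vector
# that a downstream classifier could consume.
joint = fuse("a cat on a mat", [[0, 128], [255, 128]])
print(joint)
```

Concatenation is "early fusion"; production systems more often learn a shared embedding space (as in CLIP-style models) or use cross-attention so the modalities can inform each other, but the downstream benefit is the same: one representation carrying signal from every input type.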


Excel Enthusiast to Supply Chain Innovator – The Journey to Building One of the Largest Analytic Platforms

While ChatGPT has helped raise awareness about AI capabilities, explaining how to integrate AI has presented challenges, especially when managing over 200 different data analytic reports. To address the different uses, Miranda has simplified AI into three categories: rule-based AI, learning AI (machine learning), and generative AI. Generative AI has emerged as the most dynamic tool among the three for executing and recording data analytics. Its versatility and adaptability make it particularly effective in capturing and processing diverse data sets, contributing to more comprehensive analytics outcomes. Miranda says, “People in analytics might not jump out of bed excited to tackle documentation, but it's a critical aspect of our work. Without proper documentation, we risk becoming a single point of failure, which is something we want to avoid.” ... These recordings are then converted into transcripts and securely stored in a containerized environment, streamlining the documentation process while ensuring data security. Because of process automation, Miranda says that the organization freed up 240,000 work hours last year, and they anticipate even more this year.



Quote for the day:

"Life is like riding a bicycle. To keep your balance you must keep moving." -- Albert Einstein