Showing posts with label quantum networks. Show all posts

Daily Tech Digest - May 07, 2025


Quote for the day:

"Integrity is the soul of leadership! Trust is the engine of leadership!" -- Amine A. Ayad


Real-world use cases for agentic AI

There’s a wealth of public code bases on which models can be trained. And larger companies typically have their own code repositories, with detailed change logs, bug fixes, and other information that can be used to train or fine-tune an AI system on a company’s internal coding methods. As AI model context windows get larger, these tools can look through more and more code at once to identify problems or suggest fixes. And the usefulness of AI coding tools is only increasing as developers adopt agentic AI. According to Gartner, AI agents enable developers to fully automate and offload more tasks, transforming how software development is done — a change that will force 80% of the engineering workforce to upskill by 2027. Today, there are several very popular agentic AI systems and coding assistants built right into integrated development environments, as well as several startups trying to break into the market with an AI focus out of the gate. ... Not every use case requires a full agentic system, Shiebler notes. For example, the company uses ChatGPT and reasoning models for architecture and design. “I’m consistently impressed by these models,” he says. For software development, however, using ChatGPT or Claude and cutting and pasting the code is an inefficient option, he says.


Rethinking AppSec: How DevOps, containers, and serverless are changing the rules

Application security and developers have not always been on friendly terms, but practice shows that innovative security solutions are bridging the gap, bringing developers and security closer together in a seamless fashion, with security no longer being a hurdle in developers’ daily work. Quite the contrary – security is nested in CI/CD pipelines, it’s accessible, non-obstructive, and it’s gone beyond scanning for waves and waves of false-positive vulnerabilities. It’s become, and is poised to remain, about empowering developers to fix issues early, in context, and without affecting delivery velocity. ... Another considerable battleground is identity. With reliance on distributed microservices, each component acts as both client and server, so misconfigured identity providers or weak token validation logic make room for lateral movement and exponentially increased attack opportunities. Without naming names, there are plenty of cases illustrating how breaches can occur from token forgery or authorization header manipulation. Additional headaches are exposed APIs and shadow services. Developers create new endpoints at a fast pace, so they can easily escape scrutiny, which underscores the importance of continuous discovery and dynamic testing that will “catch” those endpoints and ensure they’re covered in the security process.
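The weak-token-validation point above can be made concrete. Below is a minimal, hypothetical sketch (Python standard library only; the function names are mine, not from any framework) of an HS256-style token check that enforces the three things attackers commonly exploit: signature integrity, expiry, and audience:

```python
import base64
import hashlib
import hmac
import json
import time
from typing import Optional

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(claims: dict, secret: bytes) -> str:
    # Illustrative HS256-style token: header.payload.signature, base64url-encoded.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify(token: str, secret: bytes, audience: str) -> Optional[dict]:
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None  # malformed token
    expected = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    # Constant-time comparison blocks timing attacks on the signature check.
    if not hmac.compare_digest(b64url(expected), sig):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    # Reject expired tokens and tokens minted for a different service (audience).
    if claims.get("exp", 0) < time.time() or claims.get("aud") != audience:
        return None
    return claims
```

Skipping any one of the three checks above is exactly the kind of gap that lets a token minted for one microservice be replayed against another.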


The Hidden Cost of Complexity: Managing Technical Debt Without Losing Momentum

Outdated, fragmented, or overly complex systems become the digital equivalent of cognitive noise. They consume bandwidth, blur clarity, and slow down both decision-making and delivery. What should be a smooth flow from idea to outcome becomes a slog. ... In short, technical debt introduces a constant low-grade drag on agility. It limits responsiveness. It multiplies cost. And like visual clutter, it contributes to fatigue—especially for architects, engineers, and teams tasked with keeping transformation moving. So what can we do?
Assess System Health: Inventory your landscape and identify outdated systems, high-maintenance assets, and unnecessary complexity. Use KPIs like total cost of ownership, incident rates, and integration overhead.
Prioritize for Renewal or Retirement: Not everything needs to be modernized. Some systems need replacement. Others, thoughtful containment. The key is intentionality.
... Technical debt is a measure of how much operational risk and complexity is lurking beneath the surface. It’s not just code that’s held together by duct tape or documentation gaps—it’s how those issues accumulate and impact business outcomes. But not all technical debt is created equal. In fact, some debt is strategic. It enables agility, unlocks short-term wins, and helps organizations experiment quickly.


The Cost Conundrum of Cloud Computing

When exploring cloud pricing structures, the initial costs may seem quite attractive, but after delving deeper into the details, certain aspects may become cloudy. The pricing tiers add a layer of complexity, which means there isn’t a single recurring cost to add to the balance sheet. Rather, cloud fees vary depending on the provider, features, and several usage factors such as on-demand use, data transfer volumes, technical support, bandwidth, disk performance, and other core metrics, which can influence the overall solution’s price. However, the good news is there are ways to gain control of and manage these costs. ... Whilst understanding the costs associated with using a public cloud solution is critical, it is important to emphasise that modern cloud platforms provide robust, comprehensive and cutting-edge technologies and solutions to help drive businesses forward. Cloud platforms provide a strong foundation of physical infrastructure, robust platform-level services, and a wide array of resilient connectivity and data solutions. In addition, cloud providers continually invest in the security of their solutions to physically and logically secure the hardware and software layers with access control, monitoring tools, and stringent data security measures to keep the data safe.



Operating in the light, and in the dark (net)

While the takedown of sites hosting CSA cannot be directly described in the same light, the issue is ramping up. The Internet continues to expand - like the universe - and attempting to monitor it is a never-ending challenge. As IWF’s Sexton puts it: “Right now, the Internet is so big that it’s sort of anonymity with obscurity.” Some emerging (and already emerged) technologies such as AI can play a role in assisting those working on the side of the light - for example, the IWF has tested using AI for triage when assessing websites with thousands of images, and AI can be trained for content moderation by industry and others - but the proliferation of AI has also added to the problem. AI-generated content has now also entered the scene. From a legality standpoint, it remains the same as CSA content. Just because an AI created it does not mean that it’s permitted - at least in the UK, where IWF primarily operates. “The legislation in the UK is robust enough to cover both real material, photo-realistic synthetic content, or sheerly synthetic content. The problem it does create is one of quantity. Previously, to create CSA, it would require someone to have access to a child and conduct abuse. “Then with the rise of the Internet we also saw an increase in self-generated content. Now, AI has the ability to create it without any contact with a child at all. People now have effectively an infinite ability to generate this content.”


Why LLM applications need better memory management

Developers assume generative AI-powered tools are improving dynamically—learning from mistakes, refining their knowledge, adapting. But that’s not how it works. Large language models (LLMs) are stateless by design. Each request is processed in isolation unless an external system supplies prior context. That means “memory” isn’t actually built into the model—it’s layered on top, often imperfectly. ... Some LLM applications have the opposite problem—not forgetting too much, but remembering the wrong things. Have you ever told ChatGPT to “ignore that last part,” only for it to bring it up later anyway? That’s what I call “traumatic memory”—when an LLM stubbornly holds onto outdated or irrelevant details, actively degrading its usefulness. ... To build better LLM memory, applications need:
Contextual working memory: Actively managed session context with message summarization and selective recall to prevent token overflow.
Persistent memory systems: Long-term storage that retrieves based on relevance, not raw transcripts. Many teams use vector-based search (e.g., semantic similarity on past messages), but relevance filtering is still weak.
Attentional memory controls: A system that prioritizes useful information while fading outdated details. Without this, models will either cling to old data or forget essential corrections.
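As a toy illustration of the first two ideas - a verbatim recent-context window plus a long-term store queried by relevance - here is a hypothetical sketch; real systems would use token counting, summarization, and embedding similarity rather than keyword overlap:

```python
from collections import deque

class SessionMemory:
    """Toy memory layer: recent turns kept verbatim, older turns spilled to a
    long-term store and recalled by keyword overlap. This is a hypothetical
    sketch, not a production design."""

    def __init__(self, max_recent: int = 4):
        self.recent = deque(maxlen=max_recent)  # contextual working memory
        self.archive = []                       # persistent memory store

    def add(self, turn: str) -> None:
        if len(self.recent) == self.recent.maxlen:
            self.archive.append(self.recent[0])  # spill oldest turn before it drops
        self.recent.append(turn)

    def recall(self, query: str, k: int = 2) -> list:
        # Crude relevance filter; real systems use embeddings / semantic similarity.
        q = set(query.lower().split())
        scored = sorted(self.archive,
                        key=lambda t: -len(q & set(t.lower().split())))
        return scored[:k]

    def context(self, query: str) -> str:
        # Prompt context = relevant old turns + verbatim recent turns.
        return "\n".join(self.recall(query) + list(self.recent))
```

The weak spot the article calls out - relevance filtering - is visible here: keyword overlap will happily surface an irrelevant old turn while missing a semantically related one.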


DARPA’s Quantum Benchmarking Initiative: A Make-or-Break for Quantum Computing

While the hype around quantum computing is certainly warranted, it is often blown out of proportion. Occasionally this arises from a lack of fundamental understanding of the field. More often, however, it is a consequence of corporations obfuscating or misrepresenting facts to influence the stock market and raise capital. ... If it becomes practically applicable, quantum computing will bring a seismic shift in society, completely transforming areas such as medicine, finance, agriculture, energy, and the military, to name a few. Nonetheless, this enormous potential has resulted in rampant hype around the technology, while also fueling the proliferation of bad actors seeking to take advantage of a field not necessarily well understood by the general public. On the other hand, negativity around the technology can also cause the pendulum to swing in the other direction. ... Quantum computing is at a critical juncture. Whether it reaches its promised potential or disappears into the annals of history, much like many technologies before it, will be decided in the coming years. As such, a transparent and sincere approach to quantum computing research, leading to practically useful applications, will inspire confidence among the masses, while false and half-baked claims will deter investment in the field, eventually leading to its demise.


The reality check every CIO needs before seeking a board seat

“CIOs think technology will get them to the boardroom,” says Shurts, who has served on multiple public- and private-company boards. “Yes, more boards want tech expertise, but you have to provide the right knowledge, breadth, and depth on topics that matter to their businesses.” ... Herein lies another conundrum for CIOs seeking spots on boards. Many see those findings and think they can help with that. But the context is more important. “In your operational role as a CIO, you’re very much involved in the details, solving problems every day,” Zarmi says. “On the board, you don’t solve the problems. You help, coach, mentor, ask questions, make suggestions, and impart wisdom, but you’re not responsible for execution.” That’s another change IT leaders need to make to position themselves for board seats. Luckily, there are tools that can help them make the leap. Quinlan, for example, got a certification from the National Association of Corporate Directors (NACD), which offers a variety of resources for aspiring board members. And he took it a few steps further by attaining a financial certification. Sure, he’d been involved in P&L management, but the certification helped him understand finance at the board’s altitude. He also added a cybersecurity certification even though he runs multi-hundred-million-dollar cyber programs. “Right, but I haven’t run it at the board, and I wanted to do that,” he says.


Applying the OODA Loop to Solve the Shadow AI Problem

Organizations should have complete visibility of their AI model inventory. Inconsistent network visibility arising from siloed networks, a lack of communication between security and IT teams, and point solutions encourages shadow AI. Complete network visibility must therefore become the priority for organizations to clearly see the extent and nature of shadow AI in their systems, thus ensuring compliance, reducing risk, and promoting responsible AI use without hindering innovation. ... Organizations need to identify the effect of shadow AI once it has been discovered. This includes identifying the risks and advantages of such shadow software. ... Organizations must set clearly defined yet flexible policies regarding the acceptable use of AI to enable employees to use AI responsibly. Such policies need to allow granular control, from binary approval to more sophisticated levels like providing access based on users’ roles and responsibilities, limiting or enabling certain functionalities within an AI tool, or specifying data-level approvals where sensitive data can be processed only in approved environments. ... Organizations must evaluate and formally incorporate shadow AI tools offering substantial value to ensure their use in secure and compliant environments. Access controls need to be tightened to avoid unapproved installations; zero trust and privilege management policies can assist in this regard.


Cisco Pulls Together A Quantum Network Architecture

It will take a quantum network infrastructure to make a distributed quantum computing environment possible and allow it to scale more quickly beyond the relatively small number of qubits found in current and near-future systems, Cisco scientists wrote in a research paper. Such quantum datacenters involve “multiple QPUs [quantum processing units] … networked together, enabling a distributed architecture that can scale to meet the demands of large-scale quantum computing,” they wrote. “Ultimately, these quantum data centers will form the backbone of a global quantum network, or quantum internet, facilitating seamless interconnectivity on a planetary scale.” ... The entanglement chip will be central to an entire quantum datacenter the vendor is working toward, with new versions of what is found in current classical networks, including switches and NICs. “A quantum network requires fundamentally new components that work at the quantum mechanics level,” they wrote. “When building a quantum network, we can’t digitize information as in classical networks – we must preserve quantum properties throughout the entire transmission path. This requires specialized hardware, software, and protocols unlike anything in classical networking.”

Daily Tech Digest - February 05, 2025


Quote for the day:

"You may only succeed if you desire succeeding; you may only fail if you do not mind failing." --Philippos


Neural Networks – Intuitively and Exhaustively Explained

The process of thinking within the human brain is the result of communication between neurons. You might receive stimulus in the form of something you saw, then that information is propagated to neurons in the brain via electrochemical signals. The first neurons in the brain receive that stimulus, then each neuron may choose whether or not to "fire" based on how much stimulus it received. "Firing", in this case, is a neuron's decision to send signals to the neurons it’s connected to. ... Neural networks are, essentially, a mathematically convenient and simplified version of neurons within the brain. A neural network is made up of elements called "perceptrons", which are directly inspired by neurons. ... In AI there are many popular activation functions, but the industry has largely converged on three popular ones: ReLU, Sigmoid, and Softmax, which are used in a variety of different applications. Out of all of them, ReLU is the most common due to its simplicity and ability to generalize to mimic almost any other function. ... One of the fundamental ideas of AI is that you can "train" a model. This is done by asking a neural network (which starts its life as a big pile of random parameters) to do some task. Then, you somehow update the model based on how the model’s output compares to a known good answer.
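A perceptron as described - a weighted sum of inputs plus a bias, passed through an activation function - fits in a few lines. This sketch implements ReLU and sigmoid from the definitions above (Softmax is omitted for brevity):

```python
import math

def relu(x: float) -> float:
    # Rectified Linear Unit: pass positive stimulus through, clamp the rest to 0.
    return max(0.0, x)

def sigmoid(x: float) -> float:
    # Squashes any input into (0, 1) - a soft version of "fire or don't fire".
    return 1.0 / (1.0 + math.exp(-x))

def perceptron(inputs, weights, bias, activation=relu):
    # Weighted sum of incoming signals plus a bias, then the activation
    # decides how strongly this artificial neuron "fires".
    return activation(sum(w * x for w, x in zip(weights, inputs)) + bias)
```

Training, in this picture, is just the process of nudging `weights` and `bias` so the perceptron's output moves closer to the known good answer.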


Why honeypots deserve a spot in your cybersecurity arsenal

In addition to providing critical threat intelligence for defenders, honeypots can often serve as helpful deception techniques to ensure attackers focus on decoys instead of valuable and critical organizational data and systems. Once malicious activity is identified, defenders can use the findings from the honeypots to look for indicators of compromise (IoC) in other areas of their systems and environments, potentially catching further malicious activity and minimizing the dwell time of attackers. In addition to their threat intelligence and attack detection value, honeytokens offer the added benefit of minimal false positives, given they are highly customized decoy resources deployed with the intent of not being accessed. This contrasts with broader security tooling, which often suffers from high rates of false positives from low-fidelity alerts and findings that burden security teams and developers. ... Enterprises need to put some thought into the placement of the honeypots. It is common for them to be used in environments and systems that may be potentially easier for attackers to access, such as publicly exposed endpoints and systems that are internet accessible, as well as internal network environments and systems. The former, of course, is likely to get more interaction and provide broader generic insights.
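A honeytoken can be as simple as a planted credential that no legitimate workflow ever uses, so any request presenting it is a high-fidelity alert. A minimal, hypothetical sketch (the decoy value and function names are purely illustrative):

```python
import logging

# A decoy credential planted where an intruder might find it; the value is
# purely illustrative and deliberately matches no real key format.
HONEYTOKENS = {
    "AKIA-DECOY-0000-DO-NOT-USE": "decoy cloud key planted in the config repo",
}

def check_request(headers: dict) -> bool:
    """Return True (and log an alert) if any header carries a planted decoy
    credential. Because no legitimate workflow uses these values, a hit is a
    near-zero-false-positive indicator of compromise."""
    for value in headers.values():
        for token, label in HONEYTOKENS.items():
            if token in value:
                logging.warning("honeytoken tripped: %s", label)
                return True
    return False
```

In practice such a check would sit in a gateway or SIEM rule, and the alert would feed the IoC hunting described above.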


IoT Technology: Emerging Trends Impacting Industry And Consumers

An emerging IoT trend is the rise of emotion-aware devices that use sensors and artificial intelligence to detect human emotions through voice, facial expressions or physiological data. For businesses, this opens doors to hyper-personalized customer experiences in industries like retail and healthcare. For consumers, it means more empathetic tech—think stress-relieving smart homes or wearables that detect and respond to anxiety. ... The increasing prevalence of IoT tech means that it is being increasingly deployed into “less connected” environments. As a result, the user experience needs to be adapted so that it’s not wholly dependent on good connectivity—instead, priorities must include how to gracefully handle data gaps and robust fallbacks with missing control instructions. ... IoT systems can now learn user preferences, optimizing everything from home automation to healthcare. For businesses, this means deeper customer engagement and loyalty; for consumers, it translates to more intuitive, seamless interactions that enhance daily life. ... While not a newly emerging trend, the Industrial Internet of Things is an area of focus for manufacturers seeking greater efficiency, productivity and safety. Connecting machines and systems with a centralized work management platform gives manufacturers access to real-time data. 


When digital literacy fails, IT gets the blame

By insisting that requisite digital skills and system education are mastered before a system cutover occurs, the CIO assumes a leadership role in the educational portion of each digital project, even though IT itself may not be doing the training. Where IT should be inserting itself is in the area of system skills training and testing before the system goes live. The goals of a successful digital project should be two-fold: a system that’s complete and ready to use, and a workforce that’s skilled and ready to use it. ... IT business analysts, help desk personnel, IT trainers, and technical support personnel all have people-helping and support skills that can contribute to digital education efforts throughout the company. The more support that users have, the more confidence they will gain in new digital systems and business processes — and the more successful the company’s digital initiatives will be. ... Eventually, most of the technical glitches were resolved, and doctors, patients, and support medical personnel learned how to integrate virtual visits with regular physical visits and with the medical record system. By the time the pandemic hit in 2020, telehealth visits were already well under way. These visits worked because the IT was there, the pandemic created an emergency scenario, and, most importantly, doctors, patients, and medical support personnel were already trained on using these systems to best advantage.


What you need to know about developing AI agents

“The success of AI agents requires a foundational platform to handle data integration, effective process automation, and unstructured data management,” says Rich Waldron, co-founder and CEO of Tray.ai. “AI agents can be architected to align with strict data policies and security protocols, which makes them effective for IT teams to drive productivity gains while ensuring compliance.” ... One option for AI agent development comes directly as a service from platform vendors that use your data to enable agent analysis, then provide the APIs to perform transactions. A second option is from low-code or no-code, automation, and data fabric platforms that can offer general-purpose tools for agent development. “A mix of low-code and pro-code tools will be used to build agents, but low-code will dominate since business analysts will be empowered to build their own solutions,” says David Brooks, SVP of Evangelism at Copado. “This will benefit the business through rapid iteration of agents that address critical business needs. Pro coders will use AI agents to build services and integrations that provide agency.” ... Organizations looking to be early adopters in developing AI agents will likely need to review their data management platforms, development tools, and smarter devops processes to enable developing and deploying agents at scale.


The Path of Least Resistance to Privileged Access Management

While PAM allows organizations to segment accounts, providing a barrier between the user’s standard access and needed privileged access and restricting access to information that is not needed, it also adds a layer of internal and organizational complexity. This is because of the impression that it removes users’ access to files and accounts that they have typically had the right to use, and they do not always understand why. It can bring changes to their established processes. They don’t see the security benefit and often resist the approach, seeing it as an obstacle to doing their jobs and a cause of frustration amongst teams. As such, PAM is perceived to be difficult to introduce because of this friction. ... A significant gap in the PAM implementation process lies in the lack of comprehensive awareness among administrators. They often do not have a complete inventory of all accounts, the associated access levels, their purposes, ownership, or the extent of the security issues they face. ... Consider a scenario where a company has a privileged Windows account with access to 100 servers. If PAM is instructed to discover the scope of this Windows account, it might only identify the servers that have been accessed previously by the account, without revealing the full extent of its access or the actions performed.


Quantum networking advances on Earth and in space

“The most established use case of quantum networking to date is quantum key distribution — QKD — a technology first commercialized around 2003,” says Monga. “Since then, substantial advancements have been achieved globally in the development and production deployment of QKD, which leverages secure quantum channels to exchange encryption keys, ensuring data transfer security over conventional networks.” Quantum key distribution networks are already up and running, and are being used by companies, he says, in the U.S., in Europe, and in China. “Many commercial companies and startups now offer QKD products, providing secure quantum channels for the exchange of encryption keys, which ensures the safe transfer of data over traditional networks,” he says. Companies offering QKD include Toshiba, ID Quantique, LuxQuanta, HEQA Security, Think Quantum, and others. One enterprise already using a quantum network to secure communications is JPMorgan Chase, which is connecting two data centers with a high-speed quantum network over fiber. It also has a third quantum node set up to test next-generation quantum technologies. Meanwhile, the need for secure quantum networks is higher than ever, as quantum computers get closer to prime time.
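The sifting step at the heart of QKD's best-known protocol, BB84, is easy to simulate classically. In this toy sketch (no eavesdropper, no error correction; parameter choices are illustrative), Alice and Bob keep only the positions where their randomly chosen measurement bases match, and those sifted bits agree and form the shared key:

```python
import random

def bb84_sift(n: int = 1000, seed: int = 7):
    """Toy BB84 sifting (no eavesdropper, no error correction): Alice sends
    random bits in random bases, Bob measures in random bases, and both keep
    only the positions where their bases happened to match."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.randint(0, 1) for _ in range(n)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [rng.randint(0, 1) for _ in range(n)]
    # Matching basis: Bob reads the bit faithfully. Mismatched basis: his
    # measurement outcome is random - that position gets discarded anyway.
    bob_bits = [bit if ab == bb else rng.randint(0, 1)
                for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    keep = [i for i in range(n) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]
```

What the simulation cannot show is the security argument itself: an eavesdropper measuring in the wrong basis disturbs the quantum states and reveals herself as an elevated error rate in the sifted key.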


What are the Key Challenges in Mobile App Testing?

One of the major issues in mobile app testing is the sheer variety of devices in the market. With numerous models, each having different screen sizes, pixel densities, operating system (OS) versions and hardware specifications, ensuring the app is responsive across all devices becomes a challenge. Testing for compatibility on every device and OS can be tiresome and expensive. While tools like emulators and cloud-based testing platforms can help, it remains essential to conduct tests on real devices to ensure accurate results. ... In addition to device fragmentation, another key challenge is the wide range of OS versions. A device may run one version of an OS while another runs a different version, leading to inconsistencies in app performance. Just like any other software, mobile apps need to function seamlessly across multiple OS versions, including Android, iPhone Operating System (iOS) and other platforms. Furthermore, operating systems are updated frequently, which can cause apps to break or stop functioning. ... Mobile app users interact with apps under various network conditions, including Wi-Fi, 4G, 5G or limited connectivity. Testing how an app performs in different network conditions is crucial to ensure it does not hang or load slowly when the connection is weak.


Reimagining KYC to Meet Regulatory Scrutiny

Implementing AI and ML allows KYC to run in the background rather than having staff manually review information as time allows, said Jennifer Pitt, senior analyst for fraud and cybersecurity with Javelin Strategy & Research. “This allows the KYC team to shift to other business areas that require more human interaction like investigations,” Pitt said. Yet use of AI and ML remains low at many banks. Currently, fraudsters and cybercriminals are using generative adversarial networks - machine learning models that create new data that mirrors a training set - to make fraud less detectable. Fraud professionals should leverage generative adversarial networks to create large datasets that closely mirror actual fraudulent behavior. This process involves using a generator to create synthetic transaction data and a discriminator to distinguish between real and synthetic data. By training these models iteratively, the generator improves its ability to produce realistic fraudulent transactions, allowing fraud professionals to simulate emerging fraud types and account takeovers, and enhance detection models’ sensitivity to these evolving threats. Instead of waiting to gather sufficient historical data from known fraudulent behaviors, GANs enable a more proactive approach, helping fraud teams quickly understand new fraud trends and patterns, Pitt said.
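The generator/discriminator loop described here can be shown end to end on a deliberately tiny example. This hypothetical sketch trains a 1-D GAN where "transaction amounts" are just numbers drawn from a Gaussian; the generator learns to produce values the logistic discriminator can no longer distinguish from real ones (the data distribution, parameters, and names are all illustrative):

```python
import math
import random

def sigmoid(t: float) -> float:
    t = max(-60.0, min(60.0, t))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-t))

def train_toy_gan(steps: int = 3000, lr: float = 0.05, seed: int = 0):
    """1-D toy GAN: real 'transaction amounts' ~ N(4, 0.5). The generator
    G(z) = mu + sigma*z learns to mimic them; the logistic discriminator
    D(x) = sigmoid(w*x + c) learns to tell real from fake. Gradients are
    derived by hand from the standard GAN losses."""
    rng = random.Random(seed)
    w, c = 0.0, 0.0        # discriminator parameters
    mu, sigma = 0.0, 1.0   # generator parameters
    for _ in range(steps):
        x_real = rng.gauss(4.0, 0.5)
        z = rng.gauss(0.0, 1.0)
        x_fake = mu + sigma * z
        # Discriminator step: maximize log D(real) + log(1 - D(fake)).
        d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
        w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
        c += lr * ((1 - d_real) - d_fake)
        # Generator step: maximize log D(fake), i.e. fool the critic.
        d_fake = sigmoid(w * x_fake + c)
        grad_out = (1 - d_fake) * w   # d log D / d x_fake
        mu += lr * grad_out
        sigma += lr * grad_out * z
    return mu, sigma, w, c
```

After training, the generator's mean drifts toward the real mean of 4.0 - the same mechanism, scaled up to high-dimensional transaction records, is what produces the synthetic fraud datasets described above.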


How Agentic AI Will Transform Banking (and Banks)

Agentic AI has two intertwined vectors. For banks, one path is internal, and focused on operational efficiency for tasks including the automation of routine data entry and compliance and regulatory checks, summaries of email and reports, and the construction of predictive models for trading and risk management to bolster insights into market dynamics, fraud and credit and liquidity risk. The other path is consumer facing, and revolves around managing customer relationships, from automated help desks staffed by chatbots to personalized investment portfolio recommendations. Both trajectories aim to improve efficiency and reduce costs. Agentic AI "could have a bigger impact on the economy and finance than the internet era," Citigroup wrote in a January 2025 report that calls the technology the "Do It For Me" Economy. ... Meanwhile, automated AI decisions could inadvertently violate laws and regulations on consumer protection, anti-money laundering or fair lending laws. Agentic AI that can instruct an agent to make a trade based on bad data or assumptions could lead to financial losses and create systemic risk within the banking system. "Human oversight is still needed to oversee inputs and review the decisioning process," Davis says. 

Daily Tech Digest - October 23, 2024

What Is Quantum Networking, and What Might It Mean for Data Centers?

Conventional networks shard data into packets and move them across wires or radio waves using long-established networking protocols, such as TCP/IP. In contrast, quantum networks move data using photons or electrons, leveraging unique aspects of quantum physics to enable powerful new features like entanglement, which effectively makes it possible to verify the source of data based on the quantum state of the data itself. ... Because quantum networking remains a theoretical and experimental domain, it's challenging to say at present exactly how quantum networks might change data centers. What does seem clear, however, is that data center operators seeking to offer full support for quantum devices will need to implement fundamentally new types of network infrastructure. They'll need to deploy infrastructure resources like quantum repeaters, while also ensuring that they can support whichever networking standards might emerge in the quantum space. The good news for the fledgling quantum data center ecosystem is that true quantum networks aren't a prerequisite for connecting quantum computers. It's possible for quantum machines themselves to send and receive data over classical networks by using traditional computers and networking devices as intermediaries.


Unmasking Big Tech’s AI Policy Playbook: A Warning to Global South Policymakers

Rather than a genuine, inclusive discussion about how governments should approach AI governance, what we are witnessing instead is a clash of seemingly competing narratives swirling together to obfuscate the real aspirations of big tech. The advocates of open-source large language models (LLMs) present themselves as civic-minded, democratic, and responsible, while closed-source proponents position themselves as the responsible stewards of secure, walled-garden AI development. Both sides dress their arguments with warnings about dire consequences if their views aren’t adopted by policymakers. ... For years, tech giants have employed scare tactics to convince policymakers that any regulation will stifle innovation, lead to economic decline, and exclude countries from the prestigious digital vanguard. These dire warnings are frequently targeted, especially in the Global South, where policymakers often lack the resources and expertise to keep pace with rapid technological advancements, including AI. Big tech’s polished lobbyists offer what seems like a reasonable solution, “workable regulation” — which translates to delayed, light-touch, or self-regulation of emerging technologies.


AI Agents: A Comprehensive Introduction for Developers

The best way to think about an AI agent is as a digital twin of an employee with a clear role. When any individual takes up a new job, there is a well-defined contract that establishes the essential elements — such as job definition, success metrics, reporting hierarchy, access to organizational information, and whether the role includes managing other people. These aspects ensure that the employee is most effective in their job and contributes to the overall success of an organization. ... The persona of an AI agent is the most crucial aspect that establishes the key trait of an agent. It is the equivalent of a title or a job function in the traditional environment. For example, a customer support engineer skilled in handling complaints from customers is a job function. It is also the persona of an individual who performs this job. You can easily extend this to an AI agent. ... A task is an extension of the instruction that focuses on a specific, actionable item within the broader scope of the agent’s responsibilities. While the instruction provides a general framework covering multiple potential actions, a task is a direct, concrete action that the agent must take in response to a particular user input.
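The "employment contract" analogy maps naturally onto a data structure. Here is a hypothetical sketch (all field names and example values are mine, not from any agent framework) separating the agent's persona and general instruction from a concrete task:

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """Hypothetical 'employment contract' for an AI agent: a persona (job
    function), a general instruction, and the surrounding contract terms."""
    persona: str                                 # e.g. a job title / function
    instruction: str                             # general framework of duties
    success_metrics: list = field(default_factory=list)
    tools: list = field(default_factory=list)    # access to organizational systems

@dataclass
class Task:
    """A concrete, actionable item within the agent's broader instruction."""
    description: str
    user_input: str

support_agent = AgentSpec(
    persona="customer support engineer",
    instruction="Handle customer complaints politely; escalate refunds over $100.",
    success_metrics=["resolution rate", "customer satisfaction"],
)
task = Task(
    description="Draft a reply to a delayed-shipment complaint",
    user_input="My order is two weeks late. Where is it?",
)
```

Keeping persona and instruction stable while tasks vary per user input mirrors how a job title and job description outlive any single work item.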


AI in compliance: Streamlining HR processes to meet regulatory standards

With the increasing focus on data protection laws like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and India’s Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 under the Information Technology Act, 2000, maintaining the privacy and security of employee data has become paramount. The Indian IT Privacy Law mandates that companies ensure the protection of sensitive personal data, including employee information, and imposes strict guidelines on how data must be collected, processed, and stored. AI can assist HR teams by automating data management processes and ensuring that sensitive information is stored securely and only accessed by authorized personnel. AI-driven tools can also help monitor compliance with data privacy regulations by tracking how employee data is collected, processed, and shared within the organization. ... This proactive monitoring reduces the likelihood of non-compliance and minimizes risks associated with data breaches, helping organizations align with both international and domestic privacy laws like the Indian IT Privacy Law.


Are humans reading your AI conversations?

Tools like OpenAI’s ChatGPT and Google’s Gemini are being used for all sorts of purposes. In the workplace, people use them to analyze data and speed up business tasks. At home, people use them as conversation partners, discussing the details of their lives — at least, that’s what many AI companies hope. After all, that’s what Microsoft’s new Copilot experience is all about — just vibing and having a chat about your day. But people might share data that’d be better kept private. Businesses everywhere are grappling with data security amid the rise of AI chatbots, with many banning their employees from using ChatGPT at work. They might have specific AI tools they require employees to use. Clearly, they realize that any data fed to a chatbot gets sent to that AI company’s servers. Even if it isn’t used to train genAI models in the future, the very act of uploading data could be a violation of privacy laws such as HIPAA in the US. ... Companies that need to safeguard business data and follow the relevant laws should carefully consider the genAI tools and plans they use. It’s not a good idea to have employees using a mishmash of tools with uncertain data protection agreements or to do anything business-related through a personal ChatGPT account.


CIOs recalibrate multicloud strategies as challenges remain

Like many enterprises, Ally Financial has embraced a primary public cloud provider, adding in other public clouds for smaller, more specialized workloads. It also runs private clouds from HPE and Dell for sensitive applications, such as generative AI and data workloads requiring the highest security levels. “The private cloud option provides us with full control over our infrastructure, allowing us to balance risks, costs, and execution flexibility for specific types of workloads,” says Sathish Muthukrishnan, Ally’s chief information, data, and digital officer. “On the other hand, the public cloud offers rapid access to evolving technologies and the ability to scale quickly, while minimizing our support efforts.” Yet, he acknowledges a multicloud strategy comes with challenges and complexities — such as moving gen AI workloads between public clouds or exchanging data from a private cloud to a public cloud — that require considerable investments and planning. “Aiming to make workloads portable between cloud service providers significantly limits the ability to leverage cloud-native features, which are perhaps the greatest advantage of public clouds,” Muthukrishnan says.


DevOps and Cloud Integration: Best Practices

CI/CD practices are crucial for DevOps implementation with cloud services. Continuous integration regularly merges code changes into a shared repository, where automated tests are run to spot issues early. On the other hand, continuous deployment improves this practice by automatically deploying changes (once they pass tests) to production. The CI/CD approach can accelerate the release cycle and enhance the overall quality of the software. ... Infrastructure as Code (IaC) empowers teams to oversee and provision infrastructure via code rather than manual processes. This DevOps methodology guarantees uniformity across environments and facilitates infrastructure scalability in cloud-based settings. It represents a pivotal element in transforming any enterprise's DevOps strategy. ... According to DevOps experts, security needs to be a part of every step in the DevOps process, called DevSecOps. This means adding security checks to the CI/CD pipeline, using security tools for the cloud, and always checking for security issues. DevOps professionals usually stress how important it is to tackle security problems early in the development process, called "shifting left."
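As a minimal illustration of the CI/CD gate described above, where changes are deployed only once automated tests pass, here is a toy pipeline sketch. The function and the checks are invented for illustration; they stand in for a real test suite and deploy step.

```python
def run_pipeline(change, tests):
    """Run each automated check against a change; deploy only if all pass.

    `change` is any object under test; `tests` is a list of predicates.
    Returns a status string instead of touching real infrastructure.
    """
    for test in tests:
        if not test(change):
            return "rejected"  # CI stops the pipeline on the first failure
    return "deployed"          # CD: all checks passed, ship the change

# Hypothetical checks standing in for a real test suite
tests = [
    lambda c: "bug" not in c,  # a stand-in for passing unit tests
    lambda c: len(c) > 0,      # a stand-in for a non-empty changeset
]

print(run_pipeline("feature: add login", tests))   # deployed
print(run_pipeline("introduces a bug", tests))     # rejected
```

A "shift-left" security scan would simply be another predicate in `tests`, run before deployment rather than after.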


Data Resilience & Protection In The Ransomware Age

Backups are considered the primary way to recover from a breach, but is this enough to ensure that the organisation will be up and running with minimal impact? Testing is a critical component of ensuring that a company can recover after a breach: it provides valuable insight into the steps the company will need to take to recover from a variety of scenarios, what works, which areas need attention in the recovery process, and how long it will take to recover the files. Unfortunately, many organisations implement measures to recover but fail on the last step of their resilience approach, namely testing. Without this step, they cannot know if their recovery strategy is effective, what processes to follow to restore data following a breach, or what the timelines to recovery will be. Equally, they will not know if they have backed up their data correctly before an attack if they have not performed adequate testing. Although many IT teams are stretched and struggle to find the time to do regular testing, it is possible to automate the testing process to ensure that it occurs frequently.
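The automated backup testing the author recommends can be sketched in a few lines: back a file up, restore it to a separate path, and confirm the restored copy's checksum matches the original. This is a toy local-file sketch, not any backup product's API.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup(original: Path, backup_dir: Path) -> bool:
    """Back a file up, 'restore' it, and confirm the data is intact."""
    backup = backup_dir / (original.name + ".bak")
    shutil.copy2(original, backup)                      # the backup step
    restored = backup_dir / ("restored_" + original.name)
    shutil.copy2(backup, restored)                      # the restore step
    return checksum(original) == checksum(restored)

with tempfile.TemporaryDirectory() as tmp:
    tmp_path = Path(tmp)
    data = tmp_path / "records.db"
    data.write_bytes(b"critical business records")
    assert verify_backup(data, tmp_path)  # run this kind of check on a schedule
```

In practice the restore target would be a clean environment, and the check would also time the restore, since recovery time is exactly what untested plans fail to predict.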


Is data gravity no longer centered in the cloud?

The need for data governance and security is escalating as AI becomes more prevalent. Organizations are increasingly aware of the risks associated with cloud environments, especially regarding regulatory compliance. Maintaining sensitive data on premises allows for tighter controls and adherence to industry standards, which are often critical in AI applications dealing with personal or confidential information. The convergence of these factors signals a broader reevaluation of cloud-first strategies, leading to hybrid models that balance the benefits of cloud computing with the reliability of traditional infrastructures. This hybrid approach facilitates a tailored fit for various workloads, optimizing performance while ensuring compliance and security. ... Data can exist on any platform, and accessibility should not be problematic regardless of whether data resides on public clouds or on premises. Indeed, the data location should be transparent. Storing data on-prem or with public cloud providers affects how much an enterprise spends and the data’s accessibility for major strategic applications, including AI. Currently, on-prem is the most cost-effective AI platform — for most data sets and most solutions.


Choosing Between Cloud and On-Prem MLOps: What's Best for Your Needs?

The big benefit of cloud MLOps is the availability of virtually unlimited quantities of CPU, memory, and storage resources. Unlike on-prem environments, where resource capacity is limited by the number of servers available and the resources each one provides, you can always acquire more infrastructure in the cloud. This makes cloud MLOps especially beneficial for ML use cases where resource needs vary widely or are unpredictable. ... On-prem MLOps may also offer better performance. On-prem environments don't require you to share hardware with other customers (which the cloud usually does), so you don't have to worry about "noisy neighbors" slowing down your MLOps pipeline. The ability to move data across fast local network connections can also boost on-prem MLOps performance, as can running workloads directly on bare metal, without a hypervisor layer reducing the amount of resources available to your workloads. ... You could also go on, under a hybrid MLOps approach, to deploy your model either on-prem or in the cloud depending on factors like how many resources inference will require.



Quote for the day:

"You'll never get ahead of anyone as long as you try to get even with him." -- Lou Holtz

Daily Tech Digest - April 16, 2024

How to Build a Successful AI Strategy for Your Business in 2024

With a solid understanding of AI technology and your organization’s priorities, the next step is to define clear objectives and goals for your AI strategy. Focus on identifying the problems that AI can solve most effectively within your organization. These objectives should be specific, measurable, achievable, relevant, and time-bound (SMART). ... By setting well-defined objectives, you can create a targeted AI strategy that delivers tangible results and aligns with your overall business priorities. An AI implementation strategy often requires specialized expertise and tools that may not be available in-house. To bridge this gap, identify potential partners and vendors who can provide the necessary support for your AI strategy. Start by researching AI and machine learning companies that have a proven track record of working in your industry. When evaluating potential partners, consider factors such as their technical capabilities, the quality of their tools and platforms, and their ability to scale as your AI needs grow. Look for vendors who offer comprehensive solutions that cover the entire AI lifecycle, from data preparation and model development to deployment and monitoring.


Internet can achieve quantum speed with light saved as sound

When transferring information between two quantum computers over a distance—or among many in a quantum internet—the signal will quickly be drowned out by noise. The amount of noise in a fiber-optic cable increases exponentially the longer the cable is. Eventually, data can no longer be decoded. The classical Internet and other major computer networks solve this noise problem by amplifying signals in small stations along transmission routes. But for quantum computers to apply an analogous method, they must first translate the data into ordinary binary number systems, such as those used by an ordinary computer. This won't do. Doing so would slow the network and make it vulnerable to cyberattacks, as the odds of classical data protection being effective in a quantum computer future are slim. "Instead, we hope that the quantum drum will be able to assume this task. It has shown great promise as it is incredibly well-suited for receiving and resending signals from a quantum computer. So, the goal is to extend the connection between quantum computers through stations where quantum drums receive and retransmit signals, and in so doing, avoid noise while keeping data in a quantum state," says Kristensen.


Better application networking and security with CAKES

A major challenge in enterprises today is keeping up with the networking needs of modern architectures while also keeping existing technology investments running smoothly. Large organizations have multiple IT teams responsible for these needs, but at times, the information sharing and communication between these teams is less than ideal. Those responsible for connectivity, security, and compliance typically live across networking operations, information security, platform/cloud infrastructure, and/or API management. These teams often make decisions in silos, which causes duplication and integration friction with other parts of the organization. Oftentimes, “integration” between these teams is through ticketing systems. ... Technology alone won’t solve some of the organizational challenges discussed above. More recently, the practices that have formed around platform engineering appear to give us a path forward. Organizations that invest in platform engineering teams to automate and abstract away the complexity around networking, security, and compliance enable their application teams to go faster.


AI set to enhance cybersecurity roles, not replace them

Ready or not, though, AI is coming. That being the case, I’d caution companies, regardless of where they are on their AI journey, to understand that they will encounter challenges, whether from integrating this technology into current processes or ensuring that staff are properly trained in using this revolutionary technology, and that’s to be expected. As a cloud security community, we will all be learning together how we can best use this technology to further cybersecurity. ... First, companies need to treat AI with the same consideration as they would a person in a given position, emphasizing best practices. They will also need to determine the AI’s function — if it merely supplies supporting data in customer chats, then the risk is minimal. But if it integrates and performs operations with access to internal and customer data, it’s imperative that they prioritize strict access control and separate roles. ... We’ve been talking about a skills gap in the security industry for years now and AI will deepen that in the immediate future. We’re at the beginning stages of learning, and understandably, training hasn’t caught up yet.


Why employee recognition doesn't work: The dark side of boosting team morale

Despite the importance of appreciation, many workplaces prioritise performance-based recognition, inadvertently overlooking the profound impact of genuine appreciation. This preference for recognition over appreciation can lead to detrimental outcomes, including conditionality and scarcity. Conditionality in recognition arises from its link to past achievements and performance outcomes. Employees often feel pressured to outperform their peers and surpass their past accomplishments to receive recognition, fostering a hypercompetitive work environment that undermines collaboration and teamwork. Furthermore, the scarcity of recognition exacerbates this issue, as tangible rewards such as bonuses or promotions are limited. In this competitive landscape, employees may feel undervalued, leading to disengagement and disillusionment. To foster an inclusive and supportive workplace culture, organisations must recognise the intrinsic value of appreciation alongside performance-based recognition. Embracing appreciation cultivates a culture of gratitude, empathy, and mutual respect, strengthening interpersonal connections and boosting employee morale.


Improving decision-making in LLMs: Two contemporary approaches

Training LLMs in context-appropriate decision-making demands a delicate touch. Currently, two sophisticated approaches posited by contemporary academic machine learning research suggest alternate ways of enhancing the decision-making process of LLMs to parallel those of humans. The first, AutoGPT, uses a self-reflexive mechanism to plan and validate the output; the second, Tree of Thoughts (ToT), encourages effective decision-making by disrupting traditional, sequential reasoning. AutoGPT represents a cutting-edge approach in AI development, designed to autonomously create, assess and enhance its models to achieve specific objectives. Academics have since improved the AutoGPT system by incorporating an “additional opinions” strategy involving the integration of expert models. This presents a novel integration framework that harnesses expert models, such as analyses from different financial models, and presents it to the LLM during the decision-making process. In a nutshell, the strategy revolves around increasing the model’s information base using relevant information. 
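A heavily simplified sketch of the Tree of Thoughts idea: rather than a single sequential chain, generate several candidate "thoughts" at each step, score them, and expand only the most promising. The generator and scorer below are stubs standing in for LLM calls, and the toy problem is invented for illustration.

```python
def tree_of_thoughts(root, expand, score, depth=3, beam=2):
    """Beam search over 'thoughts' instead of one sequential chain.

    expand(thought) -> list of candidate next thoughts
    score(thought)  -> heuristic value; higher is more promising
    Keeps the `beam` best candidates at each depth.
    """
    frontier = [root]
    for _ in range(depth):
        candidates = [t for thought in frontier for t in expand(thought)]
        if not candidates:
            break
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return max(frontier, key=score)

# Toy problem: build the largest number by appending one digit at a time.
expand = lambda s: [s + d for d in "1379"]
score = lambda s: int(s) if s else 0

best = tree_of_thoughts("", expand, score, depth=3, beam=2)
print(best)  # "999"
```

In the real ToT setting, `expand` would prompt the model for alternative partial solutions and `score` would ask it (or an expert model, as in the "additional opinions" strategy) to evaluate each one.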


Unpacking the Executive Order on Data Privacy: A Deeper Dive for Industry Professionals

For privacy professionals, the order underscores the ongoing challenge of protecting sensitive information against increasingly sophisticated threats. That’s important, and shouldn’t be overlooked. Yet the White House has admitted that this order isn’t a silver bullet for all the nation’s data privacy challenges. That candor is striking. It echoes a sentiment familiar to many of us in the industry: the complexities of protecting personal information in the digital age cannot be fully addressed through singular measures against external threats. Instead, this task requires a long-term, thoughtful, multi-faceted approach – one that also confronts the internal challenges to data privacy posed by Big Tech, domestic data brokers, and foreign governments that exist outside of the designated “countries of concern” category. ... The extensive collection, usage, and sale of personal data by domestic entities—including but not limited to Big Tech companies, data brokers, and third-party vendors—poses significant risks. These practices often lack transparency and accountability, fueling privacy breaches, identity theft, and eroding public trust and individual autonomy.


10 tips to keep IP safe

CSOs who have been protecting IP for years recommend doing a risk and cost-benefit analysis. Make a map of your company’s assets and determine what information, if lost, would hurt your company the most. Then consider which of those assets are most at risk of being stolen. Putting those two factors together should help you figure out where to best spend your protective efforts (and money). If information is confidential to your company, put a banner or label on it that says so. If your company data is proprietary, put a note to that effect on every log-in screen. This seems trivial, but if you wind up in court trying to prove someone took information they weren’t authorized to take, your argument won’t stand up if you can’t demonstrate that you made it clear that the information was protected. ... Awareness training can be effective for plugging and preventing IP leaks, but only if it’s targeted to the information that a specific group of employees needs to guard. When you talk in specific terms about something that engineers or scientists have invested a lot of time in, they’re very attentive. As is often the case, humans are the weakest link in the defensive chain.


Types of Data Integrity

Here are a few data integrity issues and risks many organizations face: Compromised hardware: Power outages, fire sprinklers, or a clumsy person knocking a computer to the floor are examples of situations that can cause the loss of vital data or its corruption. In a security context, compromised hardware also means hardware that has been hacked. Cyber threats: Cyber security attacks – phishing attacks, malware – present a serious threat to data integrity. Malicious software can corrupt or alter critical data within a database. Additionally, hackers gaining unauthorized access can manipulate or delete data. If changes are made as a result of unauthorized access, it may be a failure in data security. ... Human error: A significant source of data integrity problems is human error. Mistakes that are made during manual entries can produce inaccurate or inconsistent data that then gets stored in the database. Data transfer errors: During the transfer of data, data integrity can be compromised. Transfer errors can damage data integrity, especially when moving massive amounts of data during extract, transform, and load processes, or when moving the organization’s data to a different database system.
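As one concrete guard against the human-error risk above, inconsistent manual entries can be rejected with validation checks before they reach the database. The field names and rules in this sketch are illustrative, not from any real schema.

```python
def validate_record(record):
    """Catch manual-entry mistakes before they reach the database.

    Returns a list of problems; an empty list means the record is clean.
    The fields and rules here are hypothetical, for illustration only.
    """
    problems = []
    if not record.get("name", "").strip():
        problems.append("name is missing")
    age = record.get("age")
    if not isinstance(age, int) or not 0 <= age <= 130:
        problems.append("age is out of range")
    if "@" not in record.get("email", ""):
        problems.append("email looks malformed")
    return problems

clean = {"name": "Ada", "age": 36, "email": "ada@example.com"}
typo  = {"name": "Ada", "age": 360, "email": "ada.example.com"}  # two slips
assert validate_record(clean) == []
assert len(validate_record(typo)) == 2
```

The same idea applies to transfer errors: compare a checksum of the data before and after a move, and treat any mismatch the way this function treats a bad field.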


Sisense Breach Highlights Rise in Major Supply Chain Attacks

Many of the details of the attack are not yet clear, but the breach may have exposed hundreds of Sisense's prominent customers to a supply chain attack that gave hackers a backdoor into the company's customer networks, a CISA official told Information Security Media Group. Experts said the attack suggests trusted companies are still failing to implement proactive defensive measures to spot supply chain attacks - such as robust access controls, real-time threat intelligence and regular security assessments - at a time when organizations are increasingly reliant on interconnected ecosystems. "These types of software supply chain attacks are only possible through compromised developer credentials and account information from an employee or contractor," said Jim Routh, chief trust officer for the software security company Saviynt. The breach highlights the need for enterprises to improve their identity access management capabilities for cloud-based services and other third parties, he said. Security intelligence platform Censys published insights into the Sisense breach Friday.



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - January 23, 2024

How human robot collaboration will affect the manufacturing industry

Traditional manufacturing systems frequently struggle to adjust to shifting demands and product variances. Human-robot collaboration provides flexibility, which is critical in today’s market. Robots are easily programmed and reprogrammed, allowing firms to quickly alter production lines to suit new goods or design changes. This adaptability is critical in an era where customer preferences shift quickly and companies must keep pace with their customers' evolving expectations. ... While the initial investment in robotics technology may be significant, the long-term cost savings from human-robot collaboration are attractive. Automated procedures in the manufacturing industries lower labor costs, boost productivity, and reduce errors to a great extent, resulting in a more cost-effective manufacturing operation. ... There is a notion that automation will replace human occupations; on the contrary, the collaboration is intended to supplement human abilities. By automating mundane and physically demanding jobs, it frees human workers to focus on critical thinking, problem-solving, and creativity.


Mastering System Design: A Comprehensive Guide to System Scaling for Millions

Horizontal scaling emerges as a strategic solution to accommodate increasing demands and ensure the system’s ability to handle a burgeoning user base. Horizontal scaling involves adding more servers to the system and distributing the workload across multiple machines. Unlike vertical scaling, which involves enhancing the capabilities of a single server, horizontal scaling focuses on expanding the server infrastructure horizontally. One of the key advantages of horizontal scaling is its potential to improve system performance and responsiveness. By distributing the workload across multiple servers, the overall processing capacity increases, alleviating performance bottlenecks and enhancing the user experience. Moreover, horizontal scaling offers improved fault tolerance and reliability. The redundancy introduced by multiple servers reduces the risk of a single point of failure. In the event of hardware issues or maintenance requirements, traffic can be seamlessly redirected to other available servers, minimizing downtime and ensuring continuous service availability. Scalability becomes more flexible with horizontal scaling. 
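The workload distribution and failover behavior described above can be illustrated with a toy round-robin balancer that skips servers marked unhealthy. This is a sketch of the concept, not a production load-balancer design; the server names are invented.

```python
import itertools

class RoundRobinBalancer:
    """Distribute requests across servers, skipping any marked unhealthy."""

    def __init__(self, servers):
        self.healthy = set(servers)
        self._cycle = itertools.cycle(servers)
        self._count = len(servers)

    def mark_down(self, server):
        """Remove a server from rotation, e.g. after a failed health check."""
        self.healthy.discard(server)

    def next_server(self):
        """Return the next healthy server, emulating seamless failover."""
        for _ in range(self._count):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])
assert [lb.next_server() for _ in range(3)] == ["app-1", "app-2", "app-3"]
lb.mark_down("app-2")               # hardware issue or maintenance
assert lb.next_server() == "app-1"
assert lb.next_server() == "app-3"  # traffic redirected past app-2
```

The redundancy is what removes the single point of failure: losing one server shrinks capacity but never stops service, which is exactly the fault-tolerance argument made above.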


Backup admins must consider GenAI legal issues -- eventually

LLMs requiring a massive amount of data and, by proxy, dipping into nebulous legal territory is inherent to GenAI services contracts, said Andy Thurai, an analyst at Constellation Research. Many GenAI vendors are now offering indemnity or other legal protections for customers. ... "It's a [legal] can of worms that enterprises can't afford to open," Thurai said. Unfortunately for enterprise legal teams, the need to create guidance is fast approaching. Lawsuits by organizations such as the New York Times are looking to take back IP control and copyright from OpenAI's proprietary and commercial LLM model. Those suits are entirely focused on the contents of data itself rather than the mechanics of backup and storage that backup admins would concern themselves with, said Mauricio Uribe, chair of the software/IT and electrical practice groups at law firm Knobbe Martens. The business advantages of GenAI within backup technology are still unproven and unknown, he added. Risks such as patent infringement remain a possibility. Backup vendors are implementing GenAI capabilities such as support chatbots into their tools now, such as Rubrik's Ruby and Cohesity's Turing AI. But neither incorporates enterprise customer data or specific customer information, according to both vendors.


CFOs urged to reassess privacy budgets amid rising data privacy concerns

The ISACA Privacy in Practice 2024 survey report reveals that only 34% of organizations find it easy to understand their privacy obligations. This lack of clarity can lead to non-compliance and increased risk of data breaches. Additionally, only 43% of organizations are very or completely confident in their privacy team’s ability to ensure data privacy and achieve compliance with new privacy laws and regulations. ... To address the challenges outlined in the survey, organizations are taking proactive steps to strengthen their privacy programs. Training plays a crucial role in mitigating workforce gaps and privacy failures. Half of the respondents (50%) note that they are training non-privacy staff to move into privacy roles, while 39% are increasing the usage of contract employees or outside consultants. Organizations are also investing in privacy awareness training for employees. According to the survey, 86% of organizations provide privacy awareness training, with 66% offering training to all employees annually. Moreover, 52% of respondents provide privacy awareness training to new hires. 


Cisco sees headway in quantum networking, but advances are slow

Cisco has said that it envisions quantum data centers that could use classic LAN models to tie together quantum computers, or a quantum-based network that transmits quantum bits (qubits) from quantum servers at high-speeds to handle commercial-grade applications. “Another trend will be the growing importance of quantum networking which in 4 or 5 years – perhaps more – will enable quantum computers to communicate and collaborate for more scalable quantum solutions,” Centoni stated. “Quantum networking will leverage quantum phenomena such as entanglement and superposition to transmit information.” The current path for quantum researchers and developers is to continue to grow radix, expand mesh networking (the ability for network fabrics to support many more connections per port and higher bandwidth), and create quantum switching and repeaters, Pandey said. “We want to be able to carry quantum signals over longer distances, because quantum signals deteriorate rapidly,” he said. “We definitely want to enable them to handle those signals within a data center footprint, and that’s technology we will start experimenting on.”


Navigating the Digital Transformation: The Role of IT

While many acknowledged engaging with the six core elements of the Rewired framework, few participants considered themselves frontrunners in significant progress. This underscores the complexity and ongoing nature of digital transformation, necessitating continuous adaptation across leadership, culture, and technology. Organizations are directing efforts towards both front-end (customer experience) and back-end (operational optimization), recognizing the interconnected nature of digital transformation. Success stories include consolidating Robotic Process Automation (RPA), Artificial Intelligence (AI), and low-code development within a single organizational department. This integration facilitates synergies and holistic advancements in digital capabilities. The evolving nature of ERP transformations was also discussed, with a shift towards continuous improvements and a focus on operating models and ways of working, moving beyond purely technological considerations. The insights from this roundtable underscore the multifaceted nature of digital transformation.


Harvard Scientists Discover Surprising Hidden Catalyst in Human Brain Evolution

“Brain tissue is metabolically expensive,” said the Human Evolutionary Biology assistant professor. “It requires a lot of calories to keep it running, and in most animals, having enough energy just to survive is a constant problem.” For larger-brained Australopiths to survive, therefore, something must have changed in their diet. Theories put forward have included changes in what these human ancestors consumed or, most popularly, that the discovery of cooking allowed them to garner more usable calories from whatever they ate. ... The shift was probably a happy accident. “This was not necessarily an intentional endeavor,” Hecht posited. “It may have been an accidental side effect of caching food. And maybe, over time, traditions or superstitions could have led to practices that promoted fermentation or made fermentation more stable or more reliable.” This hypothesis is supported by the fact that the human large intestine is proportionally smaller than that of other primates, suggesting that we adapted to food that was already broken down by the chemical process of fermentation. 


Digital Personal Data Protection Act marks a new era of business-friendly governance

Surprising the business community, the DPDP Act 2023 removed the data localization requirements, marking a significant departure from the previous iterations of the Act. The earlier DPDP Bills required certain categories of personal data to be stored and processed within the country. The provision faced staunch global opposition, particularly from the US, which criticized India's requirements as discriminatory and trade distortive. In contrast, the DPDP Act, 2023 adopts a more inclusive approach, granting firms autonomy in the choice and location of cloud services for storing and processing personal data of their users. By prioritizing cost-effectiveness and competitiveness for the firms, the removal of data localization requirements signals a more accommodating government stance. In addition to scrapping data localization requirements, the DPDP Act 2023 also allows unrestricted cross-border transfer of Indian users’ personal data abroad, barring certain destination countries. Firms would not be required to conduct post-transfer impact assessments or to ensure that the destination country has similar data protection standards, as is mandated in other jurisdictions like the EU and Vietnam.


Cybersecurity: The growing partnership between HR and risk management

HR professionals themselves can also be attractive targets to bad actors. The access they have to sensitive employee and company data can be a goldmine for hackers, putting a target on the back of those within the HR organization. As such, HR leaders should put proactive, pre-breach policies in place for their own functional colleagues. Policies might include contacting internal and external parties who ask for changes to sensitive information, such as invoice numbers, email passwords, direct deposit details, and software updates. They should also include policies for remote workers and incidence response. ... When you purchase cyber insurance, you get access to pre-breach planning and policy templates, which for many organizations, is just as important as the breach coverage. While the optimal amount of insurance depends on many factors — including size, revenues, number of employees and access to confidential information — HR organizations of all sizes and structures benefit from pre-breach planning and policymaking.


IT services spending signals major role change for CIOs ahead

“This evolution in what CIOs do, the value proposition they bring to the company, is evident in the long-term playout. But it is not yet as evident to the CIOs themselves,” Lovelock said. He sees CIOs still thinking they are riding the same talent waves of the past, facing a temporary problem that they will solve: that their staff will come back, that hiring will resume, that attrition rates will decline, and that they will be able to attract the skills they need at prices they can afford. “It doesn’t look like they will ever be able to do that. There are too many things IT staff with these key resources and skills are looking for that are outside of the CIO’s control to deliver,” he said. With increasing reliance on IT services and consulting to deliver outcomes ranging from commoditized customer support to differentiating generative AI implementations, the CIO role may soon become less about being that one-stop shop for business support, overseeing project and products developed in-house, and more about weaving together myriad services undertaken by an increasingly heterogeneous mix of talent sources, predominantly beyond the CIO’s direct purview.



Quote for the day:

"Thinking is easy, acting is difficult, and to put one's thoughts into action is the most difficult thing in the world." -- Johann Wolfgang von Goethe