Daily Tech Digest - November 25, 2024

GitHub Copilot can make inline code suggestions in several ways. Give it a good descriptive function name, and it will generate a working function at least some of the time—less often if it doesn’t have much context to draw on, more often if it has a lot of similar code to use from your open files or from its training corpus. ... Test generation is generally easier to automate than initial code generation. GitHub Copilot will often generate a reasonably good suite of unit tests on the first or second try from a vague comment that includes the word “tests,” especially if you have an existing test suite open elsewhere in the editor. It will usually take your hints about additional unit tests as well, although you might notice a lot of repetitive code that really should be refactored. Refactoring often works better in Copilot Chat. Copilot can also generate integration tests, but you may have to give it hints about the scope, mocks, specific functions to test, and the verification you need. ... GitHub Copilot Code Reviews can review your code and provide feedback in two ways. One way is to review your highlighted code selection (Visual Studio Code only, open public preview, any programming language), and the other is to more deeply review all your changes. Deep reviews can use custom coding guidelines.
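A hedged sketch of the pattern described above (the function, its body, and the tests are invented for illustration, not actual Copilot output): a descriptive function name is often enough context to prompt a working body, and a vague comment containing the word “tests” is often enough to prompt a small suite.

```python
# Hypothetical illustration: a descriptive name like this often gives
# Copilot enough context to propose a working implementation.
def is_valid_ipv4_address(s: str) -> bool:
    """Return True if s is a dotted-quad IPv4 address."""
    parts = s.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        if not part.isdigit():
            return False
        if not 0 <= int(part) <= 255:
            return False
        if part != str(int(part)):  # reject leading zeros like "01"
            return False
    return True

# A vague comment such as "tests for is_valid_ipv4_address" is the kind
# of hint that often yields a small suite like this one.
def test_is_valid_ipv4_address():
    assert is_valid_ipv4_address("192.168.0.1")
    assert not is_valid_ipv4_address("256.1.1.1")
    assert not is_valid_ipv4_address("1.2.3")
    assert not is_valid_ipv4_address("a.b.c.d")

test_is_valid_ipv4_address()
```

Note the repetitive assert pattern: this is exactly the kind of generated test code the passage suggests may need refactoring afterwards.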


Closed loop optimisation: Opening a world of advantages for marketers

In marketing, closed loop optimisation refers to the collection and analysis of various data across the marketing lifecycle or customer journey to create a continuous cycle of learning and data-led decision-making. By closing the customer journey loop, starting with the first interaction all the way to “post-sale”, brand marketers can evaluate the effectiveness of advertising campaigns and channels, and deploy their resources in initiatives that deliver the best outcomes. ... With advanced analytics solutions, marketing organisations can process structured and unstructured data from internal and external sources to identify emerging trends, customer needs and behaviours, and other metrics that can inform brand strategies. When a health technology company understood with the help of analytics that user-generated content was a key factor in strengthening interactions with customers, it changed the content strategy to include user feedback, and thereby fostered a sense of community, improved credibility, and elevated the brand experience to substantially increase social media engagement within eighteen months. A top U.S. professional basketball team used predictive analytics to uncover new trends and understand the type of content that would resonate best with fans around the world.


The rise of autonomous enterprises: how robotics, AI, and automation are reshaping the workforce of tomorrow

An autonomous enterprise is an organisation that has successfully implemented the best application of automation technologies to function with minimal human intervention in most aspects. From routine administrative tasks to complex decision-making processes, autonomous enterprises leverage AI, ML, and RPA to drive efficiency, accuracy, and agility. Companies across sectors such as manufacturing, healthcare, logistics, and more are looking towards automation to streamline operations, reduce costs, and innovate. ... As human-machine collaboration grows, there is an increasing need for employers and educational institutions to address reskilling and upskilling to prepare the workforce for continuously changing labour markets. This does not mean automation will eliminate human jobs, but it will definitely require more creativity, critical thinking, and emotional intelligence among human employees—the very qualities AI cannot encapsulate. ... As robotics and AI continue to revolutionise the world, the ethical and governance challenges arising from them have to be responded to proactively and thoughtfully. Privacy, bias, and accountability issues have to be addressed robustly so that these technologies are developed and deployed appropriately.


Overcoming legal and organizational challenges in ethical hacking

A professional ethical hacker must have a broad understanding of various IT systems, networking, and protocols – essentially, a deep “under the hood” knowledge. This foundational expertise allows them to navigate different environments effectively. Additionally, target-specific knowledge is crucial, as the security measures and vulnerabilities can vary significantly based on the technology stack in use. ... AI and machine learning can significantly enhance ethical hacking efforts. On the offensive side, automated processes supported by AI can efficiently identify vulnerabilities and suggest areas for further manual security testing. This streamlines the initial phases of penetration testing and helps uncover potential issues more effectively. Additionally, AI can assist in generating detailed penetration testing reports, saving time and ensuring accuracy. On the defensive side, AI and machine learning are invaluable for detecting anomalies and correlating data to identify potential threats. These technologies enable a proactive approach to cybersecurity, enhancing both offensive and defensive strategies. By using AI and machine learning, ethical hackers can improve their effectiveness. 


Why The Gig Economy Is A Key Target For API Attacks

One of the most difficult attacks to prevent is business logic abuse. Strictly speaking, it isn’t an attack at all. Business logic abuse sees the functionality of the API used against it, so that a task it is supposed to execute is then used to carry out an attack. It might be used to subvert access control, for instance, with attackers manipulating URLs, session tokens, cookies, or hidden fields to gain advanced privileges and access sensitive data or functionality. Or bots may attempt to repeatedly sign up, log in, or execute purchases in order to validate credentials, access unauthorised data, or commit fraud. Perhaps flaws in session tokens or poor handling of session data allow the attacker to hijack sessions and escalate privileges. Or the attacker may try to bypass built-in constraints to business logic by reviewing points of entry, such as form fields, and coming up with inputs that the developers may not have planned for. ... Legacy app defences rely on embedding JavaScript code into end-user applications and devices, which slows deployment and leaves platforms vulnerable to reverse engineering. Some of this code, such as CAPTCHAs, also introduces customer friction.
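The hidden-field manipulation described above can be made concrete with a minimal sketch (the handler names and data are hypothetical): the vulnerable handler trusts a client-supplied account ID outright, while the safe one derives ownership from the authenticated session before releasing anything.

```python
# Toy in-memory "database"; in a real API this would be a backing store.
ACCOUNTS = {
    "acct-1": {"owner": "alice", "balance": 100},
    "acct-2": {"owner": "bob", "balance": 50},
}

def get_balance_vulnerable(session_user: str, requested_account_id: str) -> int:
    # Business-logic abuse target: the account ID arrives from a URL,
    # cookie, or hidden field, and is trusted without any ownership check.
    return ACCOUNTS[requested_account_id]["balance"]

def get_balance_safe(session_user: str, requested_account_id: str) -> int:
    # Fix: authorization is derived from the authenticated session,
    # never from attacker-controllable request fields.
    account = ACCOUNTS.get(requested_account_id)
    if account is None or account["owner"] != session_user:
        raise PermissionError("not your account")
    return account["balance"]
```

With the vulnerable handler, alice can read bob's balance simply by swapping the account ID in the request; the safe handler rejects the same request.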


From Contractors to OAuth: Emerging SDLC Threats for 2025

Outsourcing software development is common practice but opens the door to significant security risks when not properly managed. These outsourced operations lack the same stringent security measures applied to internal teams, creating blind spots that attackers can easily leverage. A common vulnerability in this scenario is the over-provisioning of access rights. ... Poorly configured CI/CD pipelines are another critical weakness. When organizations outsource software development, they often have little visibility into the security practices of their contractors’ environments. Attackers can exploit poorly configured pipelines to access source code or manipulate software delivery processes. ... Preventing OAuth phishing can be difficult because it exploits user behavior rather than traditional technical vulnerabilities. While phishing training is essential, the best defense is limiting the damage attackers can cause if they gain access. By restricting developer entitlements to only what is necessary for their role, organizations can reduce the impact of a compromised account and prevent broader system breaches. ... The most catastrophic SDLC security breaches in 2025 may not stem from technical vulnerabilities but from poorly managed development teams.


In a Growing Threat Landscape, Companies Must do Three Things to Get Serious About Cybersecurity

From a practical standpoint, execs and the board make budget decisions about every domain, including security. Unlike other domains, cybersecurity isn’t a profit center for most businesses, so it often gets underfunded compared to business units and projects that generate revenue. That’s a problem. If executives understand how much is at stake at a fundamental business level, they will invest in bolstering their cybersecurity posture. Cybersecurity is essential to protecting profit centers and enabling them to safely grow. And more and more, customers are looking at a company’s security bona fides when making their buying decisions. It’s in the execs’ self-interest to take charge in adopting a cybersecurity posture, as they will ultimately be held accountable in the event of a catastrophe. ... It’s also essential to have an honest, objective CISO at the helm of cybersecurity who has power at the executive table. The C-suite and board won’t ever know how to effectively prioritize security unless they have a CISO guiding them accordingly. Communication is central here. There has to be regular, open discussion between the CISO and the rest of the C-suite.


Perimeter Security Is at the Forefront of Industry 4.0 Revolution

Perimeter security is crucial for military and government organizations and business enterprises alike to detect potential threats, deter possible intruders, and delay attempts to breach a secured area or perimeter. Additionally, perimeter security maintains operational continuity within these organizations. To prevent unauthorized entry to the premises, high-security installations, commercial centers, government facilities, and other organizations can establish a physical barrier utilizing detection and deterrence techniques. ... The effectiveness of a perimeter security system depends on several factors, such as the design and implementation of the security measures, proper integration of physical and electronic devices, and the expertise of well-trained personnel. A well-designed perimeter security system should provide comprehensive coverage of a building or premises with multiple layers of security that create effective obstacles against intruders. Regular maintenance and testing of the perimeter security system are necessary to ensure its continued effectiveness. It is critical to continuously assess and expand perimeter security measures in order to counter different types of threats and hazards.


5 Trends Reshaping the Data Landscape

Before companies can successfully leverage AI and advanced analytics, it’s urgent to address the “runaway data movement and data pipeline challenges that are so common in enterprises,” he pointed out. “When you think about data movement and data pipelines, most customers have transactional systems or legacy environments that then feed data to downstream systems. Or they’re getting a firehose of data from a variety of sources that are coming from the cloud, and they can be batch or streaming data.” What happens is these organizations “take that data and transform or consume it by multiple business units using their own extract, transform, and load (ETL) solutions,” he illustrated. “They can be completely different types of data. This is typically the first kind of deviation or loss of a unified source of truth for the data.” The ETL solutions that each group manages “have their own user acceptance testing or production environments, which means more copies of data,” he pointed out. “Then that data is fed to multiple systems, maybe for dashboarding or for more low-latency analytics. But it’s also fed to their systems, like OLAP systems or data lakes.” If a data team “can’t get the data where it needs to go, they’re not going to be able to analyze it in an efficient, secure way,” he said.


Top challenges holding back CISOs’ agendas

With limited resources and an ever-growing list of threats, CISOs are often caught managing multiple projects at once. Some of these might move forward bit by bit, but without clear milestones or measurable progress, it’s difficult to show their real impact. This makes it harder for CISOs to secure extra funding or support, especially when stakeholders can’t see solid, tangible results. “That makes it almost impossible to show meaningful success,” says John Terrill, CSO at Phosphorus. “A lot of times, this can come from trying to boil the ocean.” Many CISOs recommend learning to “speak business” and occasionally scaring the board to get more funding, but these can only go so far. “The company has a finite amount of resources; you need to make peace with that,” Avivi says. ... “Aligning both the workforce and the organization’s leadership around risk appetite helps tremendously to focus your energy and your dollars in the places that most need them,” says Ken Deitz, CISO at Secureworks. “If an organization has a stated risk appetite for security risk, the priorities start to jump off the page.” CISOs should be open about the risk the organization will take if their priorities are not addressed. 



Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - November 24, 2024

AI agents are unlike any technology ever

“Reasoning” and “acting” (often implemented using the ReAct, or Reasoning and Acting, framework) are key differences between AI chatbots and AI agents. But what’s really different is the “acting” part. If the main agent LLM decides that it needs more information, some kind of calculation, or something else outside the scope of the LLM itself, it can choose to solve its problem using web searches, database queries, calculations, code execution, APIs, and specialized programs. ... Since the dawn of computing, the users who used software were human beings. With agents, for the first time ever, the software is also a user who uses software. Many of the software tools agents use are regular websites and applications designed for people. They’ll look at your screen, use your mouse to point and click, switch between windows and applications, open a browser on your desktop, and surf the web — in fact, all these abilities exist in Anthropic’s “Computer Use” feature. Other tools that the agent can access are designed exclusively for agent use. Because agents can access software tools, they’re more useful, modular, and adaptable. Instead of training an LLM from scratch, or cobbling together some automation process, you can instead provide the tools the agent needs and just let the LLM figure out how to achieve the task at hand.
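A minimal sketch of that reason-act cycle, with the LLM replaced by a scripted stub (the tool names and decision logic are invented for illustration; a real agent would call a model API and real tools such as web search or code execution):

```python
def calculator_tool(expression: str) -> str:
    # Toy "tool" the agent can act with; trusted input only.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator_tool}

def stub_llm(question: str, observations: list) -> dict:
    # Stand-in for the model's "reasoning" step: with no observations yet,
    # it decides to act; once it has a result, it decides to answer.
    if not observations:
        return {"action": "calculator", "input": "21 * 2"}
    return {"answer": f"The result is {observations[-1]}"}

def react_loop(question: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        decision = stub_llm(question, observations)
        if "answer" in decision:            # reasoning concludes: done
            return decision["answer"]
        tool = TOOLS[decision["action"]]    # "acting": invoke the chosen tool
        observations.append(tool(decision["input"]))
    return "gave up"
```

The loop structure (reason, pick a tool, observe, repeat) is the part that generalizes; everything inside `stub_llm` is what a real LLM supplies.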


Live On the Edge

Why live on the edge now? Because, despite public cloud usage being ubiquitous, many deployments are ad hoc and poorly implemented. “The focus of refactoring cloud infrastructure should be on optimizing costs by eliminating redundant, overbuilt or unused cloud infrastructure,” says Gartner. ... Can edge computing also benefit the environment? Yes, according to a study by IBM Corp. “One direct way is by using edge computing to monitor protected species of wildlife inhabiting remote places,” IBM says. “Edge computing can help wildlife officials and park rangers identify and stop poaching activities, sometimes before these offenses even occur.” Another relates to energy management. “Edge computing supports the use of smart grids, which can deliver energy more efficiently and help businesses leave a smaller carbon footprint,” IBM notes. “Grid or distributed computing is where a group of machines and networks work together for a common computing purpose. Resources are utilized in an optimized manner, thus reducing the amount of waste that can occur when large quantities of power are consumed.” More significantly, edge computing can also support the remote monitoring of oil and gas assets. 


Getting started with AI agents (part 1): Capturing processes, roles and connections

An organizational chart might be a good place to start, but I would suggest starting with workflows, as the same people within an organization tend to act with different processes and people depending on workflows. There are available tools that use AI to help identify workflows, or you can build your own gen AI model. I’ve built one as a GPT which takes the description of a domain or a company name and produces an agent network definition. Because I’m utilizing a multi-agent framework built in-house at my company, the GPT produces the network as a Hocon file, but it should be clear from the generated files what the roles and responsibilities of each agent are and what other agents it is connected to. Note that we want to make sure that the agent network is a directed acyclic graph (DAG). This means that no agent can simultaneously become down-chain and up-chain to any other agent, whether directly or indirectly. This greatly reduces the chances that queries in the agent network fall into a tailspin. In the examples outlined here, all agents are LLM-based. If a node in the multi-agent organization is to have zero autonomy, then that agent, paired with its human counterpart, should run everything by the human.
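The DAG constraint described above can be checked mechanically. This sketch (agent and edge names are hypothetical) uses Kahn's topological-sort algorithm: if the sort cannot consume every node, some agent is both up-chain and down-chain of another, directly or indirectly.

```python
from collections import deque

def is_dag(agents, edges):
    """agents: iterable of agent names; edges: (up_chain, down_chain) pairs."""
    indegree = {a: 0 for a in agents}
    children = {a: [] for a in agents}
    for up, down in edges:
        children[up].append(down)
        indegree[down] += 1
    # Start from agents with no up-chain dependencies.
    queue = deque(a for a in agents if indegree[a] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for child in children[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    # Every agent consumed => no cycle anywhere in the network.
    return visited == len(indegree)
```

Running this over a generated network definition before deploying it would catch the "tailspin" topologies the passage warns about.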


Preparing Project Managers for an AI-Driven Future

Right now, about 95% of AI conversations are around tools that help people do their jobs better, like ChatGPT or other large language models. For most project managers, AI can be a huge timesaver. Think of it as a tool that takes on repetitive tasks—like summarizing meeting notes or helping with scheduling—so you can focus on higher-value work. ... AI can free you up to focus on the strategic parts of your job. It’s not here to replace project managers; it’s here to make them more efficient. At this moment, a lot of people are using AI from a personal or group productivity perspective. But they are increasingly going to depend on AI as part of their team. You’re already managing more AI than you might think. And in the future, you’ll be managing a lot more. Some things will be done by people, some things will be done by machines, and we need to make sure the whole thing happens in a planned way. ... First thing to understand is that AI projects are data projects. If you’re used to traditional software projects, where functionality is front and center, AI is different. AI relies on data quality—“garbage in, garbage out,” as they say. Your primary focus needs to be on getting the right data in and managing the outputs, which are data as well.


Making quantum computing accessible through decentralization

A decentralized model for quantum computing sidesteps many of these challenges. Rather than relying on centralized hardware-intensive setups, it distributes computational tasks across a global network of nodes. This approach taps into existing resources—standard GPUs, laptops, and servers—without needing the extreme cooling or complex facilities required by traditional quantum hardware. Instead, this decentralized network forms a collective computational resource capable of solving real-world problems at scale using quantum techniques. This decentralized Quantum-as-a-Service approach emulates the behaviors of quantum systems without strict hardware demands. By decentralizing the computational load, these networks achieve a comparable level of efficiency and speed to traditional quantum systems—without the same logistical and financial constraints. ... Decentralized quantum computing represents a transformative shift in how we approach advanced problem-solving. By leveraging accessible infrastructure and distributing tasks across a global network, powerful computing is brought within reach of many who were previously excluded. 


Data Security vs. Cyber Security – Why the Difference Matters

Cybersecurity is the practice of safeguarding digital systems, networks, and programs from attacks that aim to steal, alter, or destroy sensitive data, extort money through ransomware, or disrupt business operations. Despite a substantial $183 billion investment in traditional security measures in 2023 and projections indicating a 14% increase in these security budgets for 2024, data breaches surged by 78%, reaching a record high. ... Data is the most valuable commodity of a company, yet we don’t see resource allocation and time investment in data security reflecting this importance. Data security involves protecting the data itself. Once protected, the data can travel anywhere and remain protected. Having the fine granularity to safeguard the data allows you to grant users the minimum access necessary for their job functions. When someone does need to use the data, they must be authorized to do so. ... Zero trust data protection techniques significantly enhance data security posture and business value. The first step to improving security and data value is identifying the most at-risk yet least accessed data. It’s essential to assess the need for clear-text visibility of high-risk data across people, processes, and systems and to consider the business impact of minimizing this risk, including factors like regulatory compliance, reputation, and insurance.
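One way to read "grant users the minimum access necessary" at the data layer is a sketch like the following (role names, fields, and masking rules are invented for illustration): each caller sees only the fields its role is entitled to, with high-risk values masked rather than returned in clear text.

```python
# Hypothetical entitlements: which fields each role may see, and in what form.
ROLE_ENTITLEMENTS = {
    "support": {"name", "email_masked"},
    "billing": {"name", "email", "card_last4"},
}

def mask_email(email: str) -> str:
    # Keep just enough to be recognizable, never the full clear-text value.
    local, _, domain = email.partition("@")
    return local[0] + "***@" + domain

def view_for_role(record: dict, role: str) -> dict:
    allowed = ROLE_ENTITLEMENTS.get(role, set())  # unknown role sees nothing
    view = {}
    if "name" in allowed:
        view["name"] = record["name"]
    if "email" in allowed:
        view["email"] = record["email"]
    elif "email_masked" in allowed:
        view["email"] = mask_email(record["email"])
    if "card_last4" in allowed:
        view["card_last4"] = record["card"][-4:]
    return view
```

Because the protection travels with the data access path rather than the network perimeter, a compromised support account still never receives a clear-text email address or card number.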


Is Your Phone Spying On You? How to Check and What to Do

For years, people have noticed advertisements for products they recently discussed in conversation — even without searching for them online — suddenly appear on their devices. While many dismissed this as a coincidence or attributed it to targeted advertising based on online searches, it turns out there’s more to the story. According to a report by 404 Media, a marketing firm has confirmed that smartphones are not just tracking users' online activity — they are also listening to what you say out loud, near your phone. Smartphones might indeed be listening to our conversations, thanks to a technology known as “active listening.” This unsettling discovery comes after a marketing firm, whose clients include tech giants like Google and Facebook, admitted to using software that monitors users’ conversations through the microphones of their devices. This admission has raised serious questions about privacy, user consent, and the ethics of targeted advertising. ... For better or for worse, there is generally nothing illegal about using audio information to target advertising. While it is obviously illegal to spy on someone without their consent, most phone users have given their permission for this practice without knowing, according to legal experts.


CNCF Brings Jaeger and OpenTelemetry Closer Together to Improve Observability

In the wake of adding support for OpenTelemetry, the project is now working on revamping the user interface for Jaeger to make that data more easily discoverable, in addition to normalizing dependency views. In addition, the project is moving toward adding support for the Storage v2 interface to consume OpenTelemetry data natively, along with adding support for ClickHouse as the official storage backend for tracing data. Finally, the project intends to add support for Helm Charts and an Operator that will make deploying Jaeger on Kubernetes clusters simpler. ... The challenge, of course, has been first finding the funding for observability initiatives, followed by the issues that arise as DevOps teams move to consolidate tooling. Many software engineers naturally become attached to a particular monitoring tool. Convincing them to swap it out for another platform requires effort and, most importantly, training. Each organization will individually decide to what degree it wants to drive tool consolidation; however, in many cases, the cost of acquiring an observability platform assumes savings will be generated by eliminating the need for other tools.


Zero Days Top Cybersecurity Agencies' Most-Exploited List

The prevalence of zero-day vulnerabilities on this year's list is a reminder that attackers regularly seek ways of exploiting widely used types of software and hardware before vendors identify the underlying flaw and fix it. The joint security advisory also details guidance prepared by CISA and the National Institute of Standards and Technology designed to improve organizations' cyber resilience to better combat all types of cybersecurity threats. Specific recommendations also include regularly using automated asset discovery to find all of the hardware, software, systems and services inside an IT organization's estate and locking them down as much as possible; prepping and testing incident response plans; and keeping regular, secure backup copies stored off-network to facilitate rapid repair and restoration of systems. The guidance also recommends implementing zero trust network architecture, using phishing-resistant multifactor authentication as an identity and access management control, enforcing least-privileged access, and reducing the number of third-party applications and unique types of builds used.


Achieving Optimal Outcomes in Security Through Platformization

Platformization unifies multiple solutions and services into a single architecture with a shared data store and streamlined management. With native integrations, each component becomes more powerful than standalone products. This approach helps increase productivity, simplify operations, and extract the most value from data, all leading to better security outcomes and greater efficiency. ... Using the platform approach should never entail giving up security efficacy for the sake of vendor consolidation or simplified management. If there is a corresponding set of point products in a given area, the minimum bar by which the “platform” component must be measured is the very best of those individual tools. Flexibility and scalability are important. A platform needs to empower your company to gradually grow into using it. A total “rip and replace” of multiple security tools at once is far more complex than most enterprises are willing to attempt. It’s even harder when you factor in the differing replacement cycles of existing solutions. You need the option to adopt the platform piece by piece or all at once – whichever suits your organization best – while retaining the ability to cover all your security bases.



Quote for the day:

“Opportunities don’t happen, you create them.” -- Chris Grosser

Daily Tech Digest - November 23, 2024

AI Regulation Readiness: A Guide for Businesses

The first thing to note about AI compliance today is that few laws and other regulations are currently on the books that impact the way businesses use AI. Most regulations designed specifically for AI remain in draft form. That said, there are a host of other regulations — like the General Data Protection Regulation (GDPR), the California Privacy Rights Act (CPRA), and the Personal Information Protection and Electronic Documents Act (PIPEDA) — that have important implications for AI. These compliance laws were written before the emergence of modern generative AI technology placed AI onto the radar screens of businesses (and regulators) everywhere, and they mention AI sparingly if at all. But these laws do impose strict requirements related to data privacy and security. Since AI and data go hand-in-hand, you can't deploy AI in a compliant way without ensuring that you manage and secure data as current regulations require. This is why businesses shouldn't think of AI as an anything-goes space due to the lack of regulations focused on AI specifically. Effectively, AI regulations already exist in the form of data privacy rules. 


Cloud vs. On-Prem AI Accelerators: Choosing the Best Fit for Your AI Workloads

Like most types of hardware, AI accelerators can run either on-prem or in the cloud. An on-prem accelerator is one that you install in servers you manage yourself. This requires you to purchase the accelerator and a server capable of hosting it, set them up, and manage them on an ongoing basis. A cloud-based accelerator is one that a cloud vendor makes available to customers over the internet using an IaaS model. Typically, to access a cloud-based accelerator, you'd choose a cloud server instance designed for AI. For example, Amazon offers EC2 cloud server instances that feature its Trainium AI accelerator chip. Google Cloud offers Tensor Processing Units (TPUs), another type of AI accelerator, as one of its cloud server options. ... Some types of AI accelerators are only available through the cloud. For instance, you can't purchase the AI chips developed by Amazon and Google for use in your own servers. You have to use cloud services to access them. ... Like most cloud-based solutions, cloud AI hardware is very scalable. You can easily add more AI server instances if you need more processing power. This isn't the case with on-prem AI hardware, which is costly and complicated to scale up.


Platform Engineering Is The New DevOps

Platform engineering has provided a useful escape hatch at just the right time. Its popularity has grown strongly, with a well-attended inaugural platform engineering day at KubeCon Paris in early 2024 confirming attendee interest. A platform engineering day was part of the KubeCon NA schedule this past week and will also be included at next year’s KubeCon in London. “I haven't seen platform engineering pushed top down from a C-suite. I've seen a lot of guerilla stuff with platform and ops teams just basically going out and doing a skunkworks thing and sneaking it into production and then making a value case and growing from there,” said Keith Babo, VP of product and marketing at Solo.io. ... “If anyone ever asks me what’s my definition of platform engineering, I tend to think of it as DevOps at scale. It’s how DevOps scales,” says Kennedy. The focus has shifted away from building cloud native technology, done by developers, to using cloud native technology, which is largely the realm of operations. That platform engineering should start to take over from DevOps in this ecosystem may not be surprising, but it does highlight important structural shifts.


Artificial Intelligence and Its Ascendancy in Global Power Dynamics

According to the OECD, AI is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments.” The vision for Responsible AI is clear: establish global auditing standards, ensure transparency, and protect privacy through secure data governance. Yet, achieving Responsible AI requires more than compliance checklists; it demands proactive governance. For example, the EU’s AI Act takes a hardline approach to regulating high-risk applications like real-time biometric surveillance and automated hiring processes, whereas the U.S., under President Biden’s Executive Order on Safe, Secure, and Trustworthy AI, emphasizes guidelines over strict enforcement. ... AI is becoming the lynchpin of cybersecurity and national security strategies. State-backed actors from China, Iran, and North Korea are weaponizing AI to conduct sophisticated cyber-attacks on critical infrastructure. The deployment of Generative Adversarial Networks (GANs) and WormGPT is automating cyber operations at scale, making traditional defenses increasingly obsolete. In this context, a cohesive, enforceable framework for AI governance is no longer optional but essential. 


Why voice biometrics is a must-have for modern businesses

Voice biometrics are making waves across multiple industries. Here’s a look at how different sectors can leverage this technology for a competitive edge:

Financial services: Banks and financial institutions are actively integrating voice verification into call centers, allowing customers to authenticate themselves with their voice, eliminating the need for secret words or PIN codes. This strengthens security, reduces time and cost per customer call, and enhances the customer experience.

Automotive: With the rise of connected vehicles, voice is already heavily used with integrated digital assistants that provide hands-free access to in-car services like navigation, settings, and communications. Adding voice recognition allows such in-car services to be personalized for the driver and opens the possibility of further enhancements such as commerce. Automotive brands can integrate voice recognition to offer seamless access to new services like parking, fueling, charging, and curbside pick-up by utilizing in-car payments that boost security, convenience, and customer satisfaction.

Healthcare: Healthcare providers can use voice authentication to securely verify patient identities over the phone or via telemedicine. This ensures that sensitive information remains protected while providing a seamless experience for patients who may need hands-free options.


When and Where to Rate-Limit: Strategies for Hybrid and Legacy Architectures

While rate-limiting is an essential tool for protecting your system from traffic overloads, applying it directly at the application layer — whether for microservices or legacy applications — is often a suboptimal strategy. ... Legacy systems operate differently. They often rely on vertical scaling and have limited flexibility to handle increased loads. While it might seem logical to apply rate-limiting directly to protect fragile legacy systems, this approach usually falls short. The main issue with rate-limiting at the legacy application layer is that it’s reactive. By the time rate-limiting kicks in, the system might already be overloaded. Legacy systems, lacking the scalability and elasticity of microservices, are more prone to total failure under high load, and rate-limiting at the application level can’t stop this once the traffic surge has already reached its peak. ... Rate-limiting should be handled further upstream rather than deep in the application layer, where it either conflicts with scalability (in microservices) or arrives too late to prevent failures. This leads us to the API gateway, the strategic point in the architecture where traffic control is most effective. 
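The gateway-level approach described above is often implemented with a token bucket: requests are admitted only while tokens remain, so a traffic surge is rejected before it ever reaches a fragile legacy backend. A minimal sketch (illustrative only; real gateways like those mentioned here expose this as configuration rather than code):

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter, the kind typically enforced
    at an API gateway rather than inside the application."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec      # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # forward the request downstream
        return False      # reject (e.g. HTTP 429) before it hits the legacy system

# Gateway-side usage: check the bucket before proxying each request.
bucket = TokenBucket(rate_per_sec=10, capacity=5)
decisions = [bucket.allow() for _ in range(8)]
```

Because the check happens upstream, the legacy system only ever sees the admitted fraction of traffic, which is exactly the proactive (rather than reactive) behavior the article argues for.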


Survey Surprise: Quantum Now in Action at Almost One-Third of Sites

The use cases for quantum — scientific research, complex simulations — have been documented for a number of years. However, with the arrival of artificial intelligence, particularly generative AI, on the scene, quantum technology may start finding more mainstream business use cases. In a separate report out of Sogeti (a division of Capgemini Group), Akhterul Mustafa describes an impending mashup of generative AI and quantum computing as the “tech world’s version of a dream team, not just changing the game but also pushing the boundaries of what we thought was possible.” ... The convergence of generative AI and quantum computing brings “some pretty epic perks,” Mustafa states. For example, it enables the supercharging of AI models. “Training AI models is a beastly task that needs tons of computing power. Enter quantum computers, which can zip through complex calculations, potentially making AI smarter and faster.” In addition, “quantum computers can sift through massive datasets in a blink. Pair that with generative AI’s knack for cooking up innovative solutions, and you’ve got a recipe for solving brain-bending problems in areas like health, environment, and beyond.”


How Continuous Threat Exposure Management (CTEM) Helps Your Business

A CTEM framework typically includes five phases: identification, prioritization, mitigation, validation, and reporting and improvement. In the first phase, systems are continuously monitored to identify new or emerging vulnerabilities and potential attack vectors. This continuous monitoring is essential to the vulnerability management lifecycle. Identified vulnerabilities are then assessed based on their potential impact on critical assets and business operations. In the mitigation phase, action is taken to defend against high-risk vulnerabilities by applying patches, reconfiguring systems or adjusting security controls. The validation stage focuses on testing defenses to ensure vulnerabilities are properly mitigated and the security posture remains strong. In the final phase of reporting and improvement, IT leaders gain access to security metrics and improved defense routes, based on lessons learned from incident response. ... While both CTEM and vulnerability management aim to identify and remediate security weaknesses, they differ in scope and execution. Vulnerability management is more about targeted and periodic identification of vulnerabilities within an organization based on a set scan window.
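The prioritization phase above — ranking vulnerabilities by their potential impact on critical assets — can be sketched in a few lines. The field names and scoring rule here are illustrative assumptions, not part of any formal CTEM specification:

```python
def prioritize(vulns, critical_assets):
    """Sketch of the CTEM prioritization phase: rank identified
    vulnerabilities by severity, weighted up when they touch a
    critical asset."""
    def risk(v):
        weight = 2.0 if v["asset"] in critical_assets else 1.0
        return v["severity"] * weight
    return sorted(vulns, key=risk, reverse=True)

# Toy inventory: a high-severity finding on a critical asset outranks
# a slightly higher raw score on a non-critical one.
vulns = [
    {"id": "V1", "severity": 7.5, "asset": "test-server"},
    {"id": "V2", "severity": 5.0, "asset": "billing-db"},
    {"id": "V3", "severity": 9.8, "asset": "billing-db"},
]
ranked = prioritize(vulns, critical_assets={"billing-db"})
```

In a real deployment this ranking would be re-run continuously as the identification phase feeds in new findings, closing the loop the framework describes.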


DevOps in the Cloud: Leveraging Cloud Services for Optimal DevOps Practices

A well-designed DevOps transformation strategy can help organizations deliver software products and their services quickly and reliably while improving the overall efficiency of their development and delivery processes. ... Cloud platforms facilitate the immediate provisioning of infrastructure components, including servers, storage units, and databases. This helps teams swiftly initiate new development and testing environments, hastening the software development lifecycle. Companies can see a significant decrease in infrastructure provisioning time by integrating cloud services. ... DevOps helps development and operations teams work together. Cloud platforms provide a central place for storing code, configurations, and important files so everyone can be on the same page. Additionally, cloud-based communication and collaboration tools streamline communication and break down silos between teams. ... Cloud services provide a pay-as-you-go system, so there is no need for a large upfront investment in hardware. This way, companies can scale their infrastructure according to their requirements, saving a lot of money. 


Reinforcement learning algorithm provides an efficient way to train more reliable AI agents

To boost the reliability of reinforcement learning models for complex tasks with variability, MIT researchers have introduced a more efficient algorithm for training them. The findings are published on the arXiv preprint server. The algorithm strategically selects the best tasks for training an AI agent so it can effectively perform all tasks in a collection of related tasks. In the case of traffic signal control, each task could be one intersection in a task space that includes all intersections in the city. By focusing on a smaller number of intersections that contribute the most to the algorithm's overall effectiveness, this method maximizes performance while keeping the training cost low. The researchers found that their technique was between five and 50 times more efficient than standard approaches on an array of simulated tasks. This gain in efficiency helps the algorithm learn a better solution in a faster manner, ultimately improving the performance of the AI agent. "We were able to see incredible performance improvements, with a very simple algorithm, by thinking outside the box. An algorithm that is not very complicated stands a better chance of being adopted by the community because it is easier to implement and easier for others to understand,"
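The core idea — training on a small subset of tasks (intersections) that transfers well to the whole collection — can be illustrated with a greedy coverage sketch. This is a generic illustration of subset selection, not the specific MIT algorithm, and the transfer-score matrix is an assumed input:

```python
def select_training_tasks(tasks, transfer, budget):
    """Greedily pick `budget` tasks so every task in the collection is
    well covered by at least one selected task. `transfer[i][j]` is an
    assumed estimate in [0, 1] of how well an agent trained on task i
    performs on task j. (Illustrative only -- not the MIT algorithm.)"""
    selected = []
    best = [0.0] * len(tasks)   # best coverage achieved so far per task
    for _ in range(budget):
        # Pick the task whose addition raises total coverage the most.
        gains = [sum(max(best[j], transfer[i][j]) - best[j]
                     for j in range(len(tasks)))
                 for i in range(len(tasks))]
        pick = max(range(len(tasks)), key=lambda i: gains[i])
        selected.append(tasks[pick])
        best = [max(best[j], transfer[pick][j]) for j in range(len(tasks))]
    return selected

# Toy example: four intersections, budget for training on two.
# A and B behave similarly to each other, as do C and D.
tasks = ["A", "B", "C", "D"]
transfer = [
    [1.0, 0.8, 0.1, 0.1],
    [0.8, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.7],
    [0.1, 0.1, 0.7, 1.0],
]
chosen = select_training_tasks(tasks, transfer, budget=2)
```

With this structure the sketch picks one intersection from each behavioral cluster, covering all four tasks while training on only two — the kind of efficiency gain the researchers report, albeit achieved here by a much cruder rule.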



Quote for the day:

"Too many of us are not living our dreams because we are living our fears." -- Les Brown

Daily Tech Digest - November 22, 2024

AI agents are coming to work — here’s what businesses need to know

Defining exactly what an agent is can be tricky, however: LLM-based agents are an emerging technology, and there’s a level of variance in the sophistication of tools labelled as “agents,” as well as how related terms are applied by vendors and media. And as with the first wave of generative AI (genAI) tools, there are question marks around how businesses will use the technology. ... With so many tools in development or coming to the market, there’s a certain amount of confusion among businesses that are struggling to keep pace. “The vendors are announcing all of these different agents, and you can imagine what it’s like for the buyers: instead of ‘The Russians are coming, the Russians are coming,’ it’s ‘the agents are coming, the agents are coming,’” said Loomis. “They’re being bombarded by all of these new offerings, all of this new terminology, and all of these promises of productivity.” Software vendors also offer varying interpretations of the term “agent” at this stage, and tools coming to market exhibit a broad spectrum of complexity and autonomy. ... Many of the agent builder tools coming to business and work apps require little or no expertise. This accessibility means a wide range of workers could manage and coordinate their own agents.


The limits of AI-based deepfake detection

In terms of inference-based detection, ground truth is never known and must be assumed, so detection is expressed as a probability — anywhere from one to ninety-nine percent — that the content in question is or is not likely manipulated. An inference-based platform needs no buy-in from content platforms, but it does need robust models trained on a wide variety of deepfaking techniques and technologies across various use cases and circumstances. To stay ahead of emerging threat vectors and groundbreaking new models, those making an inference-based solution can look to emerging gen AI research to implement such methods into detection models as or before such research becomes productized. ... Greater public awareness and education will always be of immense importance, especially in places where content is consumed that could potentially be deepfaked or artificially manipulated. Yet deepfakes are getting so convincing, so realistic that even storied researchers now have a hard time differentiating real from fake simply by looking at or listening to a media file. This is how advanced deepfakes have become, and they will only continue to grow in believability and realism. This is why it is crucial to implement deepfake detection solutions in the aforementioned content platforms or anywhere deepfakes can and do exist.


Quantum error correction research yields unexpected quantum gravity insights

So far, scientists have not found a general way of differentiating trivial and non-trivial AQEC codes. However, this blurry boundary motivated Liu, Daniel Gottesman of the University of Maryland, US; Jinmin Yi of Canada’s Perimeter Institute for Theoretical Physics; and Weicheng Ye at the University of British Columbia, Canada, to develop a framework for doing so. To this end, the team established a crucial parameter called subsystem variance. This parameter describes the fluctuation of subsystems of states within the code space, and, as the team discovered, links the effectiveness of AQEC codes to a property known as quantum circuit complexity. ... The researchers also discovered that their new AQEC theory carries implications beyond quantum computing. Notably, they found that the dividing line between trivial and non-trivial AQEC codes also arises as a universal “threshold” in other physical scenarios – suggesting that this boundary is not arbitrary but rooted in elementary laws of nature. One such scenario is the study of topological order in condensed matter physics. Topologically ordered systems are described by entanglement conditions and their associated code properties. 


Towards greener data centers: A map for tech leaders

The transformation towards sustainability can be complex, involving key decisions about data center infrastructure. Staying on-premises offers control over infrastructure and data but poses questions about energy sourcing. Shifting to hybrid or cloud models can leverage the innovations and efficiencies of hyperscalers, particularly regarding power management and green energy procurement. One of the most significant architectural advancements in this context is hyperconverged infrastructure (HCI). As we know, traditionally data centers operate using a three-tier architecture comprising separate servers, storage, and network equipment. This model, though reliable, has clear limitations in terms of energy consumption and cooling efficiency. By merging the server and storage layers, HCI reduces both the power demands and the associated cooling requirements. ... The drive to create more efficient and environmentally conscious data centers is not just about cost control; it’s also about meeting the expectations of regulators, customers, and stakeholders. As AI and other compute-intensive technologies continue to proliferate, organizations must reassess their infrastructure strategies, not just to meet sustainability goals but to remain competitive.


What is a data architect? Skills, salaries, and how to become a data framework master

The data architect and data engineer roles are closely related. In some ways, the data architect is an advanced data engineer. Data architects and data engineers work together to visualize and build the enterprise data management framework. The data architect is responsible for visualizing the blueprint of the complete framework that data engineers then build. ... Data architect is an evolving role and there’s no industry-standard certification or training program for data architects. Typically, data architects learn on the job as data engineers, data scientists, or solutions architects, and work their way to data architect with years of experience in data design, data management, and data storage work. ... Data architects must have the ability to design comprehensive data models that reflect complex business scenarios. They must be proficient in conceptual, logical, and physical model creation. This is the core skill of the data architect and the most requested skill in data architect job descriptions. This often includes SQL development and database administration. ... With regulations continuing to evolve, data architects must ensure their organization’s data management practices meet stringent legal and ethical standards. They need skills to create frameworks that maintain data quality, security, and privacy.


AI – Implementing the Right Technology for the Right Use Case

Right now, we very much see AI in this “peak of inflated expectations” phase and predict that it will dip into the “trough of disillusionment”, where organizations realize that it is not the silver bullet they thought it would be. In fact, there are already signs of cynicism as decision-makers are bombarded with marketing messages from vendors and struggle to discern what is a genuine use case and what is not relevant for their organization. This is a theme that also emerged as cybersecurity automation matured – the need to identify the right use case for the technology, rather than try to apply it across the board. ... That said, AI is and will continue to be a useful tool. In today’s economic climate, as businesses adapt to a new normal of continuous change, AI—alongside automation—can be a scale function for cybersecurity teams, enabling them to pivot and scale to defend against evermore diverse attacks. In fact, our recent survey of 750 cybersecurity professionals found that 58% of organizations are already using AI in cybersecurity to some extent. However, we do anticipate that AI in cybersecurity will pass through the same adoption cycle and challenges experienced by “the cloud” and automation, including trust and technical deployment issues, before it becomes truly productive.


A GRC framework for securing generative AI

Understanding the three broad categories of AI applications is just the beginning. To effectively manage risk and governance, further classification is essential. By evaluating key characteristics such as the provider, hosting location, data flow, model type, and specificity, enterprises can build a more nuanced approach to securing AI interactions. A crucial factor in this deeper classification is the provider of the AI model. ... As AI technology advances, it brings both transformative opportunities and unprecedented risks. For enterprises, the challenge is no longer whether to adopt AI, but how to govern AI responsibly, balancing innovation against security, privacy, and regulatory compliance. By systematically categorizing generative AI applications—evaluating the provider, hosting environment, data flow, and industry specificity—organizations can build a tailored governance framework that strengthens their defenses against AI-related vulnerabilities. This structured approach enables enterprises to anticipate risks, enforce robust access controls, protect sensitive data, and maintain regulatory compliance across global jurisdictions. The future of enterprise AI is about more than just deploying the latest models; it’s about embedding AI governance deeply into the fabric of the organization.
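The classification characteristics named above (provider, hosting, data flow, model type) lend themselves to a simple profile record that a governance process can score. A minimal sketch — the field names, category values, and scoring rule are all illustrative assumptions, not a formal GRC standard:

```python
from dataclasses import dataclass

@dataclass
class AIAppProfile:
    """Characteristics used to classify a generative AI application
    for governance. Field names and values are illustrative."""
    name: str
    provider: str      # e.g. "first-party", "vendor", "open-source"
    hosting: str       # "saas", "private-cloud", or "on-prem"
    data_flow: str     # "outbound" (data leaves the enterprise) or "internal-only"
    model_type: str    # "general-purpose" or "domain-specific"

    def risk_tier(self):
        # Toy rule: external hosting and outbound data flow raise the tier.
        score = 0
        score += 1 if self.hosting == "saas" else 0
        score += 2 if self.data_flow == "outbound" else 0
        return "high" if score >= 2 else "medium" if score == 1 else "low"

# A vendor-hosted chat assistant that sends enterprise data outbound
# would land in the highest governance tier under this rule.
app = AIAppProfile("chat-assistant", "vendor", "saas",
                   "outbound", "general-purpose")
tier = app.risk_tier()
```

In practice each tier would map to concrete controls — access restrictions, data-loss prevention, jurisdiction-specific compliance checks — as the article's framework suggests.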


Business Continuity Depends on the Intersection of Security and Resilience

The focus of security, or the goal of security, or the intended purpose of security in its most natural and traditional form, right before we start to apply it to other things, is to prevent bad things from happening, or protect the organization or protect assets. It doesn't necessarily have to be technology that does it. This is where your policies and procedures come into place. Letting users know what acceptable use policies are or what things are accepted when leveraging corporate resources. From a technology perspective, it's your firewalls, antivirus, intrusion detection systems and things of that nature. So, this is where we focus on good cyber hygiene. We're controlling the controllables and making sure that we're taking care of the things that are within our control. What about resilience? This one is near and dear to my heart. That's because I've been in tech and security for almost 25 years, and I've kind of gone through this evolution of what I think is important. We're trained as practitioners in this industry to believe that the goal is to reduce risk. We must reduce or mitigate cyber risk, or we can make other risk decisions. We can avoid it, we can accept it, or we can transfer it. But practically speaking, when we show up to work every day and we're doing something active, we're reducing risk.


How to stop data mesh turning into a data mess

Realistically, expecting employees to remember to follow data quality and compliance guidelines is neither fair nor enforceable. Adherence must be implemented without frustrating users, and become an integral part of the project delivery process. Unlikely as this sounds, a computational governance platform can impose the necessary standards as ‘guardrails’ while also accelerating the time to market of products. Sitting above an organisation’s existing range of data enablement and management tools, a computational governance platform ensures every project follows pre-determined policies, for quality, compliance, security, and architecture. Highly customisable standards can be set at global or local levels, whatever is required. ... While this might seem restrictive, there are many benefits from having a standardised way of working. To streamline processes, intelligent automated templates help data practitioners quickly initiate new projects and search for relevant data. The platform can oversee the deployment of data products by checking their compliance and taking care of the resource provisioning, freeing the teams from the burden of coping with infrastructure technicalities (on cloud or on-prem) and certifying data product compliance at the same time, before data products enter production. 


The SEC Fines Four SolarWinds Breach Victims

Companies should ensure the cyber and data security information they share within their organizations is consistent with what they share with government agencies, shareholders and the public, according to Buchanan Ingersoll & Rooney’s Sanger. This applies to their security posture prior to a breach, as well as their responses afterward. “Consistent messaging is difficult to manage given that dozens, hundreds or thousands could be responsible for an organization’s cybersecurity. Investigators will always be able to find a dissenting or more pessimistic outlook among the voices involved,” says Sanger. “If there is a credible argument that circumstances are or were worse than what the organization shares publicly, leadership should openly acknowledge it and take steps to justify the official perspective.” Corporate cybersecurity breach reporting is still relatively uncharted territory, however. “Even business leaders who intend to act with complete transparency can make inadvertent mistakes or communicate poorly, particularly because the language used to discuss cybersecurity is still developing and differs between communities,” says Sanger. “It’s noteworthy that the SEC framed each penalized company as having, ‘negligently minimized its cybersecurity incident in its public disclosures.’ 



Quote for the day:

"Perfection is not attainable, but if we chase perfection we can catch excellence." -- Vince Lombardi

Daily Tech Digest - November 21, 2024

Building Resilient Cloud Architectures for Post-Disaster IT Recovery

A resilient cloud architecture is designed to maintain functionality and service quality during disruptive events. These architectures ensure that critical business applications remain accessible, data remains secure, and recovery times are minimized, allowing organizations to maintain operations even under adverse conditions. To achieve resilience, cloud architectures must be built with redundancy, reliability, and scalability in mind. This involves a combination of technologies, strategies, and architectural patterns that, when applied collectively ... Cloud-based DRaaS solutions allow organizations to recover critical workloads quickly by replicating environments in a secondary cloud region. This ensures that essential services can be restored promptly in the event of a disruption. Automated backups, on the other hand, ensure that all extracted data is continually saved and stored in a secure environment. Using regular snapshots can also provide rapid restoration points, giving teams the ability to revert systems to a pre-disaster state efficiently. ... Infrastructure as code (IaC) allows for the automated setup and configuration of cloud resources, providing a faster recovery process after an incident.


Agile Security Sprints: Baking Security into the SDLC

Making agile security sprints effective requires organizations to embrace security as a continuous, collaborative effort. The first step? Integrating security tasks into the product backlog right alongside functional requirements. This approach ensures that security considerations are tackled within the same sprint, allowing teams to address potential vulnerabilities as they arise — not after the fact when they're harder and more expensive to fix. ... By addressing security iteratively, teams can continuously improve their security posture, reducing the risk of vulnerabilities becoming unmanageable. Catching security issues early in the development lifecycle minimizes delays, enabling faster, more secure releases, which is critical in a competitive development landscape. The emphasis on collaboration between development and security teams breaks down silos, fostering a culture of shared responsibility and enhancing the overall security-consciousness of the organization. Quickly addressing security issues is often far more cost-effective than dealing with them post-deployment, making agile security sprints a necessary choice for organizations looking to balance speed with security.


The new paradigm: Architecting the data stack for AI agents

With the semantic layer and historical data-based reinforcement loop in place, organizations can power strong agentic AI systems. However, it’s important to note that building a data stack this way does not mean downplaying the usual best practices. This essentially means that the platform being used should ingest and process data in real-time from all major sources, have systems in place for ensuring the quality/richness of the data and then have robust access, governance and security policies in place to ensure responsible agent use. “Governance, access control, and data quality actually become more important in the age of AI agents. The tools to determine what services have access to what data become the method for ensuring that AI systems behave in compliance with the rules of data privacy. Data quality, meanwhile, determines how well an agent can perform a task,” Naveen Rao, VP of AI at Databricks, told VentureBeat. ... “No agent, no matter how high the quality or impressive the results, should see the light of day if the developers don’t have confidence that only the right people can access the right information/AI capability. This is why we started with the governance layer with Unity Catalog and have built our AI stack on top of that,” Rao emphasized.


Enhancing visibility for better security in multi-cloud and hybrid environments

The number one challenge for infrastructure and cloud security teams is visibility into their overall risk–especially in complex environments like cloud, hybrid cloud, containers, and Kubernetes. Kubernetes is now the tool of choice for orchestrating and running microservices in containers, but it has also been one of the last areas to catch up from a security perspective, leaving many security teams feeling caught on their heels. This is true even if they have deployed admission control or have other container security measures in place. Teams need a security tool in place that can show them who is accessing their workloads and what is happening in them at any given moment, as these environments have an ephemeral nature to them. A lot of legacy tooling just has not kept up with this demand. The best visibility is achieved with tooling that allows for real-time visibility and real-time detection, not point-in-time snapshotting, which does not keep up with the ever-changing nature of modern cloud environments. To achieve better visibility in the cloud, automate security monitoring and alerting to reduce manual effort and ensure comprehensive coverage. Centralize security data using dashboards or log aggregation tools to consolidate insights from across your cloud platforms.


How Augmented Reality is Shaping EV Development and Design

Traditionally, prototyping has been a costly and time-consuming stage in vehicle development, often requiring multiple physical models and extensive trial and error. AR is disrupting this process by enabling engineers to create and test virtual prototypes before building physical ones. Through immersive visualizations, teams can virtually assess design aspects like fit, function, and aesthetics, streamlining modifications and significantly shortening development cycles. ... One of the key shifts in EV manufacturing is the emphasis on consumer-centric design. EV buyers today expect not just efficiency but also vehicles that reflect their lifestyle choices, from customizable interiors to cutting-edge tech features. AR offers manufacturers a way to directly engage consumers in the design process, offering a virtual showroom experience that enhances the customization journey. ... AR-assisted training is one frontier seeing a lot of adoption. By removing humans from dangerous scenarios while still allowing them to interact with those same scenarios, companies can increase safety while still offering practical training. In one example from Volvo, augmented reality is allowing first responders to assess damage on EV vehicles and proceed with caution.


Digital twins: The key to unlocking end-to-end supply chain growth

Digital twins can be used to model the interaction between physical and digital processes all along the supply chain—from product ideation and manufacturing to warehousing and distribution, from in-store or online purchases to shipping and returns. Thus, digital twins paint a clear picture of an optimal end-to-end supply chain process. What’s more, paired with today’s advances in predictive AI, digital twins can become both predictive and prescriptive. They can predict future scenarios to suggest areas for improvement or growth, ultimately leading to a self-monitoring and self-healing supply chain. In other words, digital twins empower the switch from heuristic-based supply chain management to dynamic and granular optimization, providing a 360-degree view of value and performance leakage. To understand how a self-healing supply chain might work in practice, let’s look at one example: using digital twins, a retailer sets dynamic SKU-level safety stock targets for each fulfillment center that dynamically evolve with localized and seasonal demand patterns. Moreover, this granular optimization is applied not just to inventory management but also to every part of the end-to-end supply chain—from procurement and product design to manufacturing and demand forecasting. 
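The dynamic safety-stock targets mentioned above can be grounded in the classic textbook formula — service-level z-score times demand variability scaled by lead time. The numbers below are invented for illustration; a real digital twin would feed this from live, localized demand data rather than fixed inputs:

```python
import math

def safety_stock(z, demand_std, lead_time_days):
    """Classic safety-stock formula: SS = z * sigma_d * sqrt(L).
    A digital twin can re-evaluate this per SKU, per fulfillment
    center, as observed demand variability shifts."""
    return z * demand_std * math.sqrt(lead_time_days)

# Baseline: ~95% service level (z ≈ 1.65), daily demand std dev of
# 20 units, 4-day replenishment lead time.
target = safety_stock(z=1.65, demand_std=20, lead_time_days=4)

# Seasonal spike: the twin observes higher demand variability at this
# fulfillment center and raises the SKU's target accordingly.
peak_target = safety_stock(z=1.65, demand_std=35, lead_time_days=4)
```

Re-running this continuously, rather than setting targets once per planning cycle, is the shift from heuristic-based to dynamic, granular optimization the article describes.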


Illegal Crypto Mining: How Businesses Can Prevent Themselves From Being ‘Cryptojacked’

Business leaders might believe that illegal crypto mining programs pose no risks to their operations. Considering the number of resources most businesses dedicate to cybersecurity, it might seem like a low priority in comparison to other risks. However, the successful deployment of malicious crypto mining software can lead to even more risks for businesses, putting their cybersecurity posture in jeopardy. Malware and other forms of malicious software can drain computing resources, cutting the life expectancy of computer hardware. This can decrease the long-term performance and productivity of all infected computers and devices. Additionally, the large amount of energy required to support the high computing power of crypto mining can drain electricity across the organization. But one of the most severe risks associated with malicious crypto mining software is that it can include other code that exploits existing vulnerabilities. ... While powerful cybersecurity tools are certainly important, there’s no single solution to combat illegal crypto mining. But there are different strategies that business leaders can implement to reduce the likelihood of a breach, and mitigating human error is among the most important. 


10 Most Impactful PAM Use Cases for Enhancing Organizational Security

Security extends beyond internal employees as collaborations with third parties also introduce vulnerabilities. PAM solutions allow you to provide vendors with time-limited, task-specific access to your systems and monitor their activity in real time. With PAM, you can also promptly revoke third-party access when a project is completed, ensuring no dormant accounts remain unattended. Suppose you engage third-party administrators to manage your database. In this case, PAM enables you to restrict their access based on a "need-to-know" basis, track their activities within your systems, and automatically remove their access once they complete the job. ... Reused or weak passwords are easy targets for attackers. Relying on manual password management adds another layer of risk, as it is both tedious and prone to human error. That's where PAM solutions with password management capabilities can make a difference. Such solutions can help you secure passwords throughout their entire lifecycle — from creation and storage to automatic rotation. By handling credentials with such PAM solutions and setting permissions according to user roles, you can make sure all the passwords are accessible only to authorized users. 
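The time-limited, task-specific grants described above can be sketched as a small access object that scopes a vendor to named resources and denies everything once it expires or is revoked. This is an illustrative model of the behavior, not the API of any particular PAM product:

```python
import time

class AccessGrant:
    """Sketch of PAM-style third-party access: the grant names what
    the vendor may touch and expires automatically."""

    def __init__(self, vendor, scope, ttl_seconds):
        self.vendor = vendor
        self.scope = set(scope)            # "need-to-know" resources only
        self.expires_at = time.time() + ttl_seconds
        self.revoked = False

    def is_allowed(self, resource):
        if self.revoked or time.time() >= self.expires_at:
            return False   # expired or revoked grants deny by default
        return resource in self.scope

    def revoke(self):
        # Called when the project completes, so no dormant account lingers.
        self.revoked = True

# Third-party database administrators get scoped, expiring access.
grant = AccessGrant("db-admin-vendor", scope={"orders-db"}, ttl_seconds=3600)
before = grant.is_allowed("orders-db")     # in scope, not expired
outside = grant.is_allowed("payroll-db")   # outside need-to-know scope
grant.revoke()
after = grant.is_allowed("orders-db")      # access removed on completion
```

A production PAM solution would add the session recording and real-time monitoring the article mentions; the point here is only the deny-by-default lifecycle of the grant itself.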


The Information Value Chain as a Framework for Tackling Disinformation

The information value chain has three stages: production, distribution, and consumption. Claire Wardle proposed an early version of this framework in 2017. Since then, scholars have suggested tackling disinformation through an economics lens. Using this approach, we can understand production as supply, consumption as demand, and distribution as a marketplace. In so doing, we can single out key stakeholders at each stage and determine how best to engage them to combat disinformation. By seeing disinformation as a commodity, we can better identify and address the underlying motivations ... When it comes to the disinformation marketplace, disinformation experts mostly agree it is appropriate to point the finger at Big Tech. Profit-driven social media platforms have understood for years that our attention is the ultimate gold mine and that inflammatory content is what attracts the most attention. There is, therefore, a direct correlation between how much disinformation circulates on a platform and how much money it makes from advertising. ... To tackle disinformation, we must think like economists, not just like fact-checkers, technologists, or investigators. We must understand the disinformation value chain and identify the actors and their incentives, obstacles, and motivations at each stage.


Why do developers love clean code but hate writing documentation?

In fast-paced development environments, particularly those adopting Agile methodologies, maintaining up-to-date documentation can be challenging. Developers often deprioritize documentation due to tight deadlines and a focus on delivering working code. This leads to informal, hard-to-understand documentation that quickly becomes outdated as the software evolves. Another significant issue is that documentation is frequently viewed as unnecessary overhead. Developers may believe that code should be self-explanatory or that documentation slows down the development process. ... To prevent documentation from becoming a second-class citizen in the software development lifecycle, Ferri-Beneditti argues that documentation needs to be observable, something that can be measured against the KPIs and goals developers and their managers often use when delivering projects. ... By offloading the burden of documentation creation onto AI, developers are free to stay in their flow state, focusing on the tasks they enjoy—building and problem-solving—while still ensuring that the documentation remains comprehensive and up-to-date. Perhaps most importantly, this synergy between GenAI and human developers does not remove human oversight. 



Quote for the day:

"The harder you work for something, the greater you'll feel when you achieve it." -- Unknown