Showing posts with label MDM. Show all posts

Daily Tech Digest - August 19, 2025


Quote for the day:

“A great person attracts great people and knows how to hold them together.” -- Johann Wolfgang von Goethe



What happens when penetration testing goes virtual and gets an AI coach

Researchers from the University of Bari Aldo Moro propose using Cyber Digital Twins (CDTs) and generative AI to create realistic, interactive environments for cybersecurity education. Their framework simulates IT, OT, and IoT systems in a controlled virtual space and layers AI-driven feedback on top. The goal is to improve penetration testing skills and strengthen understanding of the full cyberattack lifecycle. At the center of the framework is the Red Team Knife (RTK), a toolkit that integrates common penetration testing tools like Nmap, theHarvester, sqlmap, and others. What makes RTK different is how it walks learners through the stages of the Cyber Kill Chain model. It prompts users to reflect on next steps, reevaluate earlier findings, and build a deeper understanding of how different phases connect. ... This setup reflects the non-linear nature of real-world penetration testing. Learners might start with a network scan, move on to exploitation, then loop back to refine reconnaissance based on new insights. RTK helps users navigate this process with suggestions that adapt to each situation. The research also connects this training approach to a broader concept called Cyber Social Security, which focuses on the intersection of human behavior, social factors, and cybersecurity. 


7 signs it’s time for a managed security service provider

When your SOC team is ignoring 300 daily alerts and manually triaging what should be automated, that’s your cue to consider an MSSP, says Toby Basalla, founder and principal data consultant at data consulting firm Synthelize. When confusion reigns, who in the SOC team knows which red flag actually means something? Plus, if you’re depending on one person to monitor traffic during off-hours, and that individual is out sick, what happens then? ... Organizations typically realize they need an MSSP when their internal team struggles to keep pace with alerts, incident response, or compliance requirements, says Ensar Seker, CISO at SOCRadar, where he specializes in threat intelligence, ransomware mitigation, and supply chain security. This vulnerability becomes particularly evident after a close call or audit finding, when gaps in visibility, threat detection, or 24/7 coverage become undeniable. ... Many smaller enterprises simply can’t afford the cost of a full-time cybersecurity staff, or even a single dedicated expert. This leaves such organizations particularly vulnerable to all types of attacks. An MSSP can significantly help such organizations by providing a full array of services, including 24/7 monitoring, threat detection, incident response, and access to a broad range of specialized security tools and expertise. “They bring economies of scale, proactive threat intelligence, and a deep understanding of best practices,” Young says.


Cyber Security Responsibilities of Roles Involved in Software Development

Building secure software is crucial, as vulnerable software is an easy target for cybercriminals to exploit. People, process, and technology all form part of the software supply chain, and it is vital that each plays a role in securing it. While process and technology act as enablers, it is people who must buy in and adopt a mindset of ensuring security in every aspect of their routine work. ... This includes developers implementing secure coding techniques, security teams identifying vulnerabilities, and everyone involved staying updated on the latest threats and best practices to prevent potential security breaches. When all is said and done, the root cause of a vulnerability in software ultimately comes down to people: someone somewhere missed something, and a security defect crept into the supply chain and surfaced as a vulnerability. It could be a missed requirement by the business analyst or a simple coding mistake by a developer. So everyone involved in software development, from requirements gathering to deployment in the production environment, needs a sense of cybersecurity in what they do. Even those involved in the support and maintenance of software systems have a role in keeping the software secure.


Build Boringly Reliable AI Into Your DevOps

Observability for AI is different because “correctness” isn’t binary and inputs are messy. We focus on three pillars: live service metrics, evaluation metrics (task success, hallucination rate), and lineage. The first pillar looks like any microservice: we scrape metrics and trace request/response cycles. We prefer OpenTelemetry for traces because we can tag spans with prompt IDs, model routes, and experiment flags. The benefit is obvious when a perf spike happens and you can isolate it to “experiment=prompt_v17.” ... Costs don’t explode; they creep—one verbose chain-of-thought at a time. We price every inference the same way we price a SQL query: tokens in, tokens out, latency, and downstream work. For a customer-support deflection bot, we discovered that truncating history to the last 6 messages cut average tokens by 41% with no measurable drop in solved-rate over 30 days. That was an easy win. Harder wins come from selective routing: ship easy tasks to a small, fast model; escalate only when confidence is low. ... Data quality makes or breaks AI results. Before we debate model choices, we sanitize inputs, enforce schemas, and redact PII. You don’t want a customer’s credit card to become part of your “context.” We’ve had great results with a lightweight validation layer in the request path and daily batch checks on the source corpora.
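The per-inference pricing and history-truncation tactics described above can be sketched in a few lines. This is an illustrative sketch only: the function names and the per-million-token rates are made up for the example, not taken from any specific provider.

```python
# Price each call by tokens in/out, and cap prompt size by keeping only
# the most recent messages of the conversation history.

def estimate_cost(tokens_in: int, tokens_out: int,
                  rate_in: float = 0.50, rate_out: float = 1.50) -> float:
    """Dollar cost of one inference, given per-million-token rates."""
    return (tokens_in * rate_in + tokens_out * rate_out) / 1_000_000

def truncate_history(messages: list, keep_last: int = 6) -> list:
    """Keep only the last N messages, as in the deflection-bot example."""
    return messages[-keep_last:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
print(len(truncate_history(history)))        # 6 messages survive
print(estimate_cost(1200, 300))              # cost for 1200 in / 300 out tokens
```

Logging this figure per request is what makes cost "creep" visible before it becomes a budget problem.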


Why Training Won’t Solve the Citizen Developer Security Problem

In most organizations, security training is a core component of cybersecurity frameworks and often a compliance requirement. Helping employees recognize and respond to cyber threats significantly reduces human error, the leading cause of security breaches. That said, traditional security training for technically inclined IT staff and developer teams is already a formidable challenge. Rolling out training for citizen developers—employees with little to no formal IT or security background— is exponentially harder for several reasons ... It’s a well-known fact: security training has always struggled to deliver lasting behavioral change. For two decades, employees have been told, “Don’t click suspicious links in emails.” Yet, click rates on phishing emails remain stubbornly high. Why? Human error is persistent, so training alone is not enough. In response, businesses are layering technology — advanced email gateways, sandboxing, Endpoint Detection and Response (EDR), and real-time URL scanning — around users to compensate for their inevitable lapses in judgment. ... Unfortunately, traditional AppSec tools fall short for no-code apps, which aren’t built line by line and rely on proprietary logic inaccessible to standard code scans. Even with access, interpreting their risks demands specialized cybersecurity expertise, rendering traditional code-scanning tools ineffective.


6 signs of a dying digital transformation

“It’s a fundamental disconnect where the technology being implemented simply isn’t delivering the promised improvements to operations, customer experience, or competitive advantage.” This indicator, he notes, often reveals itself as a growing cynicism within the organization, with teams feeling like they’re simply “doing digital” for its own sake without a clear understanding of the “why” or seeing any real positive impact. ... When users aren’t interested or feel no need to use the transformation’s new tools or applications, it indicates a disconnect between the users, their goals, and actual business outcomes, says Aparna Achanta, IBM Consulting’s cybersecurity strategist and AI governance and transformation leader. To successfully address this issue, Achanta recommends aligning digital transformation with the overall business vision, making sure that the voices of end-users and customers are being heard. ... Strong business leadership, and a willingness to admit mistakes, are essential to digital transformation success, Hochman says. “Too often, enterprises run away from failure.” He notes that such moments are actually golden opportunities to break paradigms and try new approaches. “The more failures a company speaks openly about, the more innovation occurs.” ... “Adoption is the oxygen of transformation,” he says. 


Why Master Data Management Is Even More Important Now

There is a mindset shift that must happen to get people to buy into the cost and the overhead of managing the data in a way that's going to be usable, Thompson says. “It’s knowing how to match technology up with a set of business processes, internal culture, commitment to do things properly and tie [that] to a business outcome that makes sense,” he says. “[T]he level of maturity of some good companies is bad. They’re just bad at managing their data assets.” ... “[MDM] has very real business consequences, and I think that's the part that we can all do better is to start talking about the business outcome, because these business outcomes are so serious and so easy to understand that it shouldn't be hard to get business leaders behind it,” says Thompson. “But if you try to get business leaders behind MDM, it sounds like you want to undertake a science project with their help. It’s not about the MDM, it’s about the business outcome that you can get if you do a great job at MDM.” ... In older organizations, MDM maturity tends to be unevenly distributed. The core data tends to be fairly well organized and managed, but the rest isn’t. The age-old problem of data ownership and a reticence to share data doesn’t help. “The notion of data mesh [is] I’ll manage this piece, and you manage that piece. We’ll be disconnected but we can connect, and you can use it, but don’t mess with it. It’s mine,” says Landry.


How to Future-Proof Your Data and AI Strategy

The earlier you find a software bug, the less expensive it is to fix and the less negative customer impact it has – this is a basic principle of software development. And the value of a shift-left approach becomes even more apparent when applied to data privacy in the age of AI. If you use personal information to train models and realize later that you shouldn’t have, the only solution is to roll back the model, which also rolls back the value of the system and the competitive advantage it was intended to deliver. ... Companies need a scalable approach to determine where to go deep and where to move quickly. Prioritize based on impact by applying stricter controls where AI is high-risk or high-stakes, such as projects where AI is core to the functionality of new solutions or segments of the business. Apply lighter-touch governance where risk is low and build scalable policies that align governance intensity with business context, risk appetite, and innovation goals. ... Future-proofing your data and AI strategy is more than having the right tools and processes; it’s a mindset. If your approach isn’t designed for scalability and agility, it can quickly become a source of friction. A rigid, compliance-focused model makes even the best tools feel ineffective and can result in governance being seen as a bottleneck rather than a value driver.


The Unavoidable ‘SCREAM’: Why Enterprise Architecture Must Transform for the Organization of Tomorrow

In an era where every discussion, whether personal or organizational, is steeped in the pervasive influence of AI and data, one naturally questions the true state of Enterprise Architecture (EA) within most organizations today. Too often, we observe situational chaos and a predominantly reactive posture, where EA teams find themselves supporting hasty executive decisions in a culture of order-taking. Businesses, in turn, perceive Information Technology as slow to deliver, while IT teams, grappling with a perceived lack of business understanding, struggle to demonstrate timely value. This dynamic often leads to organizations becoming vendor-driven, with core architectural management often unaddressed. Despite this, there’s no doubt that the demand for Enterprise Architecture is surging. However, the existing challenges—from the sheer breadth of required skillsets and knowledge to the overwhelming abundance of frameworks to choose from—frequently plunge EA practices into moments of SCREAM: Situational Chaotic Realities of Enterprise Architecture Management. However, among these challenges, there persists a profound desire for adaptive design and resilient enterprise architecture. Significant architectural efforts are indeed undertaken across organizations of all sizes. The equilibrium that every organization truly needs, however, often feels elusive.


Microsoft Morphs Fusion Developers To Full Stack Builders

Citizen development is a thorny subject; allowing business “laypersons” to impact the way software application code is structured, aligned and executed is an unpopular concept with command line purists who would prefer to keep the suits at arm’s length, if not further. ... The central argument from Silver and Cunningham is that it’s really tough to teach businesspeople to code and, equally tough to teach software engineers the principles of business operations. The Redmond pair suggest that Microsoft Power Platform will provide the “scaffolding” for full-stack teams to fuse (yes, okay, we’re not using that word anymore) their two previously quite separate working environments. ... To make full-stack development a reality inside any given organization, Microsoft has said that there will need to be a degree of initial investment into engineering systems and context. This, then, would be the scaffolding. Redmond suggests that new applications will emerge that are architected to support natural language development, augmentation and modification. With boundaries, safeguards and guardrails in place to oversee what AI agents can do when left in the hands of businesspeople, software systems will need to be engineered with enough meta-knowledge to understand the business context of the decisions that might be made without breaking other parts of the system. 

Daily Tech Digest - July 12, 2025


Quote for the day:

"If you do what you’ve always done, you’ll get what you’ve always gotten." -- Tony Robbins


Why the Value of CVE Mitigation Outweighs the Costs

When it comes to CVEs and continuous monitoring, meeting compliance requirements can be daunting and confusing. Compliance isn’t just achieved; rather, it is a continuous maintenance process. Compliance frameworks might require additional standards, such as Federal Information Processing Standards (FIPS), Federal Risk and Authorization Management Program (FedRAMP), Security Technical Implementation Guides (STIGs) and more that add an extra layer of complexity and time spent. The findings are clear. Telecommunications and infrastructure companies reported an average of $3 million in new revenue annually by improving their container security enough to qualify for security-sensitive contracts. Healthcare organizations averaged $7.3 million in new revenue, often driven by unlocking expansion into compliance-heavy markets. ... The industry has long championed “shifting security left,” or embedding checks earlier in the pipeline to ensure security measures are incorporated throughout the entire software development life cycle. However, as CVE fatigue worsens, many teams are realizing they need to “start left.” That means: Using hardened, minimal container images by default; Automating CVE triage and patching through reproducible builds; Investing in secure-by-default infrastructure that makes vulnerability management invisible to most developers
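The "automating CVE triage" idea above amounts to filtering scanner findings down to what is both serious and actionable. A minimal sketch, assuming a simplified, hypothetical report structure (real scanners each have their own output schema):

```python
# Surface only findings at or above a severity floor that have a fixed
# version available, so developers see a short, actionable patch list.

findings = [
    {"id": "CVE-2025-0001", "severity": "CRITICAL", "fixed_in": "1.2.4"},
    {"id": "CVE-2025-0002", "severity": "LOW",      "fixed_in": None},
    {"id": "CVE-2025-0003", "severity": "HIGH",     "fixed_in": "2.0.1"},
    {"id": "CVE-2025-0004", "severity": "HIGH",     "fixed_in": None},
]

def actionable(findings, min_severity=frozenset({"CRITICAL", "HIGH"})):
    """Keep severe findings for which a patched package version exists."""
    return [f for f in findings
            if f["severity"] in min_severity and f["fixed_in"]]

for f in actionable(findings):
    print(f["id"], "->", f["fixed_in"])
```

In a "start left" pipeline, a filter like this runs at build time against the hardened base image, so most developers never see the noise.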


Generative AI: A Self-Study Roadmap

Building generative AI applications requires comfort with Python programming and basic machine learning concepts, but you don't need deep expertise in neural network architecture or advanced mathematics. Most generative AI work happens at the application layer, using APIs and frameworks rather than implementing algorithms from scratch. ... Modern generative AI development centers around foundation models accessed through APIs. This API-first approach offers several advantages: you get access to cutting-edge capabilities without managing infrastructure, you can experiment with different models quickly, and you can focus on application logic rather than model implementation. ... Generative AI applications require different API design patterns than traditional web services. Streaming responses improve user experience for long-form generation, allowing users to see content as it's generated. Async processing handles variable generation times without blocking other operations. ... While foundation models provide impressive capabilities out of the box, some applications benefit from customization to specific domains or tasks. Consider fine-tuning when you have high-quality, domain-specific data that foundation models don't handle well—specialized technical writing, industry-specific terminology, or unique output formats requiring consistent structure.
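The streaming pattern described above is usually implemented as an async generator: the caller renders each token as it arrives instead of waiting for the full response. The sketch below simulates the token source; a real client would read from a model API's streaming endpoint.

```python
import asyncio

async def stream_completion(prompt: str):
    """Yield tokens as they become available (simulated here)."""
    for token in ["Streaming", " keeps", " users", " engaged."]:
        await asyncio.sleep(0)   # stand-in for network latency per chunk
        yield token

async def main():
    chunks = []
    async for token in stream_completion("demo"):
        chunks.append(token)     # a UI would display each chunk immediately
    return "".join(chunks)

print(asyncio.run(main()))
```

The same async machinery handles the variable generation times mentioned above: while one request streams, the event loop is free to serve others.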


Announcing GenAI Processors: Build powerful and flexible Gemini applications

At its core, GenAI Processors treat all input and output as asynchronous streams of ProcessorParts (i.e. two-way aka bidirectional streaming). Think of it as standardized data parts (e.g., a chunk of audio, a text transcription, an image frame) flowing through your pipeline along with associated metadata. This stream-based API allows for seamless chaining and composition of different operations, from low-level data manipulation to high-level model calls. ... We anticipate a growing need for proactive LLM applications where responsiveness is critical. Even for non-streaming use cases, processing data as soon as it is available can significantly reduce latency and time to first token (TTFT), which is essential for building a good user experience. While many LLM APIs prioritize synchronous, simplified interfaces, GenAI Processors – by leveraging native Python features – offer a way for writing responsive applications without making code more complex. ... GenAI Processors is currently in its early stages, and we believe it provides a solid foundation for tackling complex workflow and orchestration challenges in AI applications. While the Google GenAI SDK is available in multiple languages, GenAI Processors currently only support Python.
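The stream-of-parts concept can be approximated in plain Python: each processor is an async function over an async iterator of parts, and a pipeline is just function composition. This mirrors the idea only; the names below are illustrative and are not the actual GenAI Processors API.

```python
import asyncio

async def source(parts):
    """Emit raw parts (audio chunks, text, frames) as a stream."""
    for p in parts:
        yield p

async def uppercase(stream):
    """A low-level data-manipulation stage."""
    async for part in stream:
        yield part.upper()

async def tag(stream, label):
    """Attach metadata to each part as it flows through."""
    async for part in stream:
        yield f"[{label}] {part}"

async def run():
    # Composition: source -> uppercase -> tag, processed as parts arrive.
    pipeline = tag(uppercase(source(["audio chunk", "text chunk"])), "part")
    return [p async for p in pipeline]

print(asyncio.run(run()))
```

Because each stage consumes and yields lazily, downstream work starts on the first part rather than after the whole input is ready, which is the latency benefit the library is after.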


Scaling the 21st-century leadership factory

Identifying priority traits is critical; just as important, CEOs and their leadership teams must engage early and often with high-potential employees and unconventional thinkers in the organization, recognizing that innovation often comes from the edges of the business. Skip-level meetings are a powerful tool for this purpose. Most famously, Apple’s Steve Jobs would gather what he deemed the 100 most influential people at the company, including young engineers, to engage directly in strategy discussions—regardless of hierarchy or seniority. ... A culture of experimentation and learning is essential for leadership development—but it must be actively pursued. “Instillation of personal initiative, aggressiveness, and risk-taking doesn’t spring forward spontaneously,” General Jim Mattis explained in his 2019 book on leadership, Call Sign Chaos. “It must be cultivated for years and inculcated, even rewarded, in an organization’s culture. If the risk-takers are punished, then you will retain in your ranks only the risk averse,” he wrote. ... There are multiple ways to streamline decision-making, including redefining decision rights to focus on a handful of owners and distinguishing between different types of decisions, as not all choices are high stakes. 


Lessons learned from Siemens’ VMware licensing dispute

Siemens threatened to sue VMware if it didn’t provide ongoing support for the software and handed over a list of the software it was using that it wanted support for. Except that the list included software that it didn’t have any licenses for, perpetual or otherwise. Broadcom-owned VMware sued, Siemens countersued, and now the companies are battling over jurisdiction. Siemens wants the case to be heard in Germany, and VMware prefers the United States. Normally, if unlicensed copies of software are discovered during an audit, the customer pays the difference and maybe an additional penalty. After all, there are always minor mistakes. The vendors try to keep these costs at least somewhat reasonable, since at some point, customers will migrate from mission-critical software if the pain is high enough. ... For large companies, it can be hard to pivot quickly. Using open-source software can help reduce the risk of unexpected license changes, and, for many major tools there are third-party service providers that can offer ongoing support. Another option is SaaS software, Ringdahl says, because it does make license management a bit easier, since there’s usually transparency both for the customer and the vendor about how much usage the product is getting.


Microsoft says regulations and environmental issues are cramping its Euro expansion

One of the things that everyone needs to consider is how datacenter development in Europe is being enabled or impeded, Walsh said. "Because we have moratoriums coming at us. We have communities that don't want us there," she claimed, referring particularly to Ireland where local opposition to bit barns has been hardening because of the amount of electricity they consume and their environmental impact. Another area of discussion at the Datacloud keynote was the commercial models for acquiring datacenter capacity, which it was felt had become unfit for the new environment where large amounts are needed quickly. "From our perspective, time to market is essential. We've done a lot of leasing in the last two years, and that is all time for market pressure," Walsh said. "I also manage land acquisition and land development, which includes permitting. So the joy of doing that is that when my permits are late, I can lease so I can actually solve my own problems, which is amazing, but the way things are going, it's going to be very difficult to continue to lease the infrastructure using co-location style funding. It's just getting too big, and it's going to get harder and harder to get up the chain, for sure," she explained. ... "European regulations and planning are very slow, and things take 18 months longer than anywhere else," she told attendees at Bisnow's Datacenter Investment Conference and Expo (DICE) in Ireland.


350M Cars, 1B Devices Exposed to 1-Click Bluetooth RCE

The scope of affected systems is massive. The developer, OpenSynergy, proudly boasts on its homepage that Blue SDK — and RapidLaunch SDK, which is built on top of it and therefore also possibly vulnerable — has been shipped in 350 million cars. Those cars come from companies like Mercedes-Benz, Volkswagen, and Skoda, as well as a fourth known but unnamed company. Since Ford integrated Blue SDK into its Android-based in-vehicle infotainment (IVI) systems in November, Dark Reading has reached out to determine whether it too was exposed. ... Like any Bluetooth hack, the one major hurdle in actually exploiting these vulnerabilities is physical proximity. An attacker would likely have to position themselves within around 10 meters of a target device in order to pair with it, and the device would have to comply. Because Blue SDK is merely a framework, different devices might block pairing, limit the number of pairing requests an attacker could attempt, or at least require a click to accept a pairing. This is a point of contention between the researchers and Volkswagen. ... "Usually, in modern cars, an infotainment system can be turned on without activating the ignition. For example, in the Volkswagen ID.4 and Skoda Superb, it's not necessary," he says, though the case may vary vehicle to vehicle. 


Leaders will soon be managing AI agents – these are the skills they'll need, according to experts

An AI agent is essentially just "a piece of code", says Jarah Euston, CEO and Co-Founder of AI-powered labour platform WorkWhile, which connects frontline workers to shifts. "It may not have the same understanding, empathy, awareness of the politics of your organization, of the fears or concerns or ambitions of the people around that it is serving. "So managers have to be aware that the agent is only as good as how you've trained it. I don't think we're close yet to having agents that can operate without any human oversight. "As a manager, you want to leverage the AI to make you and your team more productive, but you constantly have to be checking, iterating and training your tools to get the most out of them."  ... Technological skills are expected to become increasingly vital over the next five years, outpacing the growth of all other skill categories. Leading the way are AI and big data, followed closely by networking, cybersecurity and overall technological literacy. The so-called 'soft skills' of creative thinking and resilience, flexibility and agility are also rising in importance, along with curiosity and lifelong learning. Empathy is one skill AI agents can't learn, says Women in Tech's Moore Aoki, and she believes this will advantage women.


Common Master Data Management (MDM) Pitfalls

In addition to failing to connect MDM’s value with business outcomes, “People start with MDM by jumping in with the technology,” Cooper said. “Then, they try to fit the people, processes, and master data into their selected technology.” Moreover, in the process of prioritizing technology first, organizations take for granted that they have good data quality, data that is clean and fit for purpose. Then, during a major initiative, such as migrating to a cloud environment, they discover their data is not so clean. ... Organizations fall into the pitfalls above and others because they try to do it alone, and most have never done MDM before. Instead, “Organizations have different capabilities with MDM,” said Cooper, “and you don’t know what you don’t know.” ... Connecting the MDM program to business objectives requires talking with the stakeholders across the organization, especially divisions with direct financial risks such as sales, marketing, procurement, and supply. Cooper said readers should learn the goals of each unit and how they measure success in growing revenue, reducing cost, mitigating risk, or operating more efficiently. ... Cooper advised focusing on data quality – e.g., through reference data – rather than technology. In the figure below, a company has data about a client, Emerson Electric, as shown on the left. 


Why Cloud Native Security Is More Complex Than You Think

Enterprise security tooling can help with more than just the monitoring of these vulnerabilities though. And, often older vulnerabilities that have been patched by the software vendor will offer “fix status” advice. This is where a specific package version is shown to the developer or analyst responsible for remediating the vulnerability. When they upgrade the current package to that later version, the vulnerability alert will be resolved. To confuse things further, the applications running in containers or serverless functions also need to be checked for non-compliance. Warnings that may be presented by security tooling when these applications are checked against recognised compliance standards, frameworks or benchmarks for noncompliance are wide and varied. For example, if a serverless function has overly permissive access to another cloud service and an attacker gets access to the serverless function’s code via a vulnerability, the attack’s blast radius could exponentially increase as a result. Or, often compliance checks reveal how containers are run with inappropriate network settings. ... At a high level, these components and importantly, how they interact with each other, is why applications running in the cloud require time, effort and specialist expertise to secure them.
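The "overly permissive access" compliance check described above is easy to illustrate: flag any policy statement that allows every action on every resource. The policy shape below is deliberately simplified and hypothetical, not a real cloud provider's schema.

```python
# Flag serverless-function policies whose blast radius would be unbounded
# if the function's code were compromised via a vulnerability.

def overly_permissive(policy: dict) -> bool:
    """True if any statement allows every action on every resource."""
    return any(
        stmt.get("Effect") == "Allow"
        and "*" in stmt.get("Action", [])
        and "*" in stmt.get("Resource", [])
        for stmt in policy.get("Statement", [])
    )

risky = {"Statement": [{"Effect": "Allow",
                        "Action": ["*"], "Resource": ["*"]}]}
scoped = {"Statement": [{"Effect": "Allow",
                         "Action": ["s3:GetObject"],
                         "Resource": ["arn:aws:s3:::reports/*"]}]}

print(overly_permissive(risky), overly_permissive(scoped))
```

Real compliance tooling runs hundreds of such rules, checking container run settings and network configuration the same way, which is part of why cloud-native security demands the time and expertise the article describes.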

Daily Tech Digest - March 02, 2025


Quote for the day:

"The ability to summon positive emotions during periods of intense stress lies at the heart of effective leadership." -- Jim Loehr


Weak cyber defenses are exposing critical infrastructure — how enterprises can proactively thwart cunning attackers to protect us all

Weak cybersecurity isn’t merely a corporate issue — it’s a national security risk. The 2021 Colonial Pipeline attack disrupted energy supplies and exposed vulnerabilities in critical industries. Rising geopolitical tensions, especially with China, amplify these risks. Recent breaches attributed to state-sponsored actors have exploited outdated telecommunications equipment and other legacy systems, revealing how complacency in updating technology can put national security in danger. For instance, last year’s hack of U.S. and international telecommunications companies exposed phone lines used by top officials and compromised data from systems for surveillance requests, threatening national security. Weak cybersecurity at these companies risks long-term costs, allowing state-sponsored actors to access sensitive information, influence political decisions and disrupt intelligence efforts. ... No company can face today’s cyber threats on its own. Collaboration between private businesses and government agencies is more than helpful — it’s imperative. Sharing threat intelligence in real-time allows organizations to respond faster and stay ahead of emerging risks. Public-private partnerships can also level the playing field by offering smaller companies access to resources like funding and advanced security tools they might not otherwise afford.


Evaluating the CISO

Delegation skills are an essential component that should be evaluated separately in this area. Effective delegation is essential to prevent becoming a bottleneck, as micromanagement is unsuitable for the CISO role. Delegating complex tasks not only lightens your load but also helps foster the team’s overall competence. Without strong delegation skills, CISOs cannot rate themselves highly in their relationship with the internal security team. ... A CISO is hired to lead, manage, and support specific projects or programs such as migrating to a cloud or hybrid infrastructure, implementing zero-trust principles, launching security awareness initiatives, or assessing risks and creating a roadmap for post-quantum cryptography implementation. The success of these initiatives ultimately falls under the CISO’s responsibility. To execute these programs effectively, the CISO relies heavily on their team and internal organizational peers. As such, building strong relationships with both is essential for successfully delivering projects. ... A CISO must have responsibility for the information security budget, which includes funding for the team, tools, and services. Without direct control over the budget, it becomes challenging to rate the relationship with management highly, as budget ownership is a critical aspect of the CISO’s role.


Unraveling Large Language Model Hallucinations

You might have seen model hallucinations. They are the instances where LLMs generate incorrect, misleading, or entirely fabricated information that appears plausible. These hallucinations happen because LLMs do not “know” facts in the way humans do; instead, they predict words based on patterns in their training data. ... Supervised Fine-Tuning makes the model capable. However, even a well-trained model can generate misleading, biased, or unhelpful responses. Therefore, Reinforcement Learning with Human Feedback is required to align it with human expectations. We start with the assistant model, trained by SFT. For a given prompt, we generate multiple model outputs. Human labelers rank or score these outputs based on quality, safety, and alignment with human preferences. We use these data to train a whole separate neural network that we call a reward model. The reward model imitates human scores. It is a simulator of human preferences. It is a completely separate neural network, probably with a transformer architecture, but it is not a language model in the sense that it generates diverse language. It’s just a scoring model.
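A reward model of the kind described above is commonly trained on ranked pairs with a pairwise logistic (Bradley-Terry) objective: the loss is small when the human-preferred response scores higher. A minimal sketch, with plain numbers standing in for the network's scalar outputs:

```python
import math

def pairwise_loss(score_preferred: float, score_rejected: float) -> float:
    """-log(sigmoid(s_w - s_l)): low when the preferred answer outscores
    the rejected one, high when the ranking is inverted."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correct ranking with a clear margin -> small loss
print(round(pairwise_loss(2.0, 0.0), 4))
# Inverted ranking -> large loss, pushing the model to fix its scores
print(round(pairwise_loss(0.0, 2.0), 4))
```

Minimizing this loss over many labeled pairs is what turns the scoring network into the "simulator of human preferences" the excerpt describes.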


How to Communicate the Business Value of Master Data Management

In an ideal scenario, MDM is integral to a broader D&A strategy that shows how D&A supports the organization's strategic goals. The strategy aligns with these goals, prioritizes the business outcomes it will support, and details what is needed to achieve them. Leaders must therefore understand and prioritize the explicit business outcomes that MDM will support before creating an MDM strategy. In other words, "improving decision-making" is not good enough; "increase customer service levels by 5% by end of December 2025" is the level of detail required. D&A leaders may recognize that master data is causing a problem or limiting an opportunity, which is where they would rely on MDM. If so, those leaders should consider questions that help identify the problem, the KPIs, and the key stakeholders. These questions surface potential business outcomes that MDM could support. Figure 1 provides a worksheet to build this initial picture and facilitate stakeholder discussions. The worksheet maps high-level goals onto a run-grow-transform framework, which could also be represented as three columns for the primary business value drivers: risk, revenue, and cost.


4 ways to get your business ready for the agentic AI revolution

Agents could be used eventually, but only once a partnership approach identifies the right opportunities. "Agents are becoming a big part of how generative AI and machine learning are used in business today. The way agents will be used in travel will be fascinating to watch. I think this technology will certainly be a part of the mix," he said. "The process for Hyatt will be to find the right technologies -- and we'll do that in close partnership with our business leaders and the technology teams that run the applications. We'll then provide the AI services to drive those transitions for the business." ... Keith Woolley, chief digital and information officer at the University of Bristol, is another digital leader who sees the potential benefits of agents. However, he said these advantages will become manifest over the longer term. "We are looking at agentic AI, but we're not implementing it yet," he said. "We sit as a management team and ask questions like, 'Should we do our admissions process using agentic AI? What would be the advantage?'" Woolley told ZDNET he could envision a situation in which AI and automation help assess and inform candidates worldwide about the status of their applications.


Cloud Giants Collaborate on New Kubernetes Resource Management Tool

The core innovation of kro is the introduction of the ResourceGraphDefinition custom resource. kro encapsulates a Kubernetes deployment and its dependencies into a single API, enabling custom end-user interfaces that expose only the parameters applicable to a non-platform engineer. This masking hides the complexity of API endpoints for Kubernetes and cloud providers that are not useful in a deployment context. ... kro works seamlessly with the existing cloud provider Kubernetes extensions that are available to manage cloud resources from Kubernetes. These are AWS Controllers for Kubernetes (ACK), Google's Config Connector (KCC), and Azure Service Operator (ASO). kro enables standardised, reusable service templates that promote consistency across different projects and environments, with the benefit of being entirely Kubernetes-native. It is still in the early stages of development. "As an early-stage project, kro is not yet ready for production use, but we still encourage you to test it out in your own Kubernetes development environments," the post states. ... Most significantly for the Crossplane community, Farcic questioned kro's purpose given its functional overlap with existing tools. "kro is serving more or less the same function as other tools created a while ago without any compelling improvement," he observed.
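As a rough illustration, a ResourceGraphDefinition pairs a simplified end-user schema with the full Kubernetes resources it expands into. The sketch below follows the shape of kro's published examples, but the API is still v1alpha1, and the `WebApp` kind and field values here are hypothetical:

```yaml
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-app
spec:
  # The simplified API exposed to end users: only these
  # parameters are visible; everything else stays hidden.
  schema:
    apiVersion: v1alpha1
    kind: WebApp
    spec:
      name: string
      replicas: integer | default=1
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: ${schema.spec.replicas}
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: app
                  image: nginx
```

An end user would then create a `WebApp` object with just a name and replica count, and kro materialises the underlying Deployment on their behalf.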


Why a different approach to AIOps is needed for SD-WAN

AIOps tools enhance efficiency by integrating seamlessly with IT management tools, enabling proactive issue identification and streamlining IT management processes. Beyond that, they optimize the performance, efficiency, and dependability of an organization’s network resources to ensure an optimal user experience. On the infrastructure side, many organizations now rely on SD-WAN – software-defined wide area networking – to manage and optimize data traffic efficiently across different types of networks. SD-WAN is an effective way to connect the organization and provide users with application access. It helps businesses improve network performance, cut costs, and gain flexibility by easily connecting to various network types. ... AIOps tools use the information extracted from SD-WAN systems to autonomously resolve issues without human intervention. Beyond remediation, AIOps tools apply predictive analytics to forecast future events or outcomes related to network operations. This makes the whole system run more smoothly and reliably, while machine learning algorithms use historical data to make predictions and proactively improve the performance of critical applications.
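The baseline-and-deviation logic such predictive analytics relies on can be sketched in a few lines. This toy example (not any vendor's actual algorithm) keeps an exponentially weighted forecast of a latency series drawn from SD-WAN telemetry and flags samples that stray far from it:

```python
def ewma_alerts(samples, alpha=0.3, threshold=3.0):
    """Flag indices whose value deviates sharply from an exponentially
    weighted moving average -- a minimal stand-in for the predictive
    baseline an AIOps tool builds from SD-WAN telemetry."""
    if not samples:
        return []
    mean, var, alerts = samples[0], 0.0, []
    for i, x in enumerate(samples[1:], start=1):
        std = max(var ** 0.5, 1e-9)
        # Skip the first few samples while the baseline warms up.
        if i > 3 and abs(x - mean) > threshold * std:
            alerts.append(i)
        # Update the running forecast and its spread.
        diff = x - mean
        mean += alpha * diff
        var = (1 - alpha) * (var + alpha * diff * diff)
    return alerts

latency_ms = [20, 21, 19, 22, 20, 21, 95, 20, 22]   # spike at index 6
print(ewma_alerts(latency_ms))                       # [6]
```

A real AIOps pipeline would feed alerts like these into automated remediation rather than just printing them.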


AI-Driven Threat Detection and the Need for Precision

AI algorithms, particularly those based on machine learning, excel at sifting through massive datasets and identifying patterns that would be nearly impossible for us mere humans to spot. An AI system might analyze network traffic patterns to identify unusual data flows that could indicate a data exfiltration attempt. Alternatively, it could scan email attachments for malicious code that traditional antivirus software might miss. Ultimately, AI feeds on context and content. The effectiveness of these systems in protecting your security posture is inextricably linked to the quality of the data they are trained on and the precision of their algorithms. ... Finally, AI-driven threat detection does not eliminate the need for human expertise. Skilled security professionals should still oversee AI systems and make informed decisions based on their own contextual expertise and experience. Human oversight validates the AI's findings, and threat detection algorithms cannot fully replace the critical thinking and intuition of human analysts. There may come a time when human professionals exist in AI's shadow. Yet, for now, combining the power of AI with human knowledge and a commitment to continuous learning can form the building blocks of a sophisticated defense program.


From Ambiguity to Accountability: Analyzing Recommender System Audits under the DSA

In these early years of the DSA, a range of stakeholders – online platforms, civil society, the European Commission (EC), and national Digital Service Coordinators (DSCs) – must experiment, identify good practices, and share lessons learned. Such iteration is important to ensure an adaptive DSA regime that spurs innovation and responds to shifting technologies, risks, and mitigation strategies. The need for iteration and flexibility, however, should not mean the audits fail to deliver on their potential as vehicles for transparency and accountability. The first round of independent audits of recommender systems reveals clear areas for immediate improvement. Because the core definitions and methodologies were developed independently by platforms and auditors, significant inconsistencies exist in both risk assessment and audit processes. ... The DSA requires the main parameters of recommender systems to be spelled out in plain and intelligible language. What does this concretely mean in the recommender system context? Is it free of “acronyms or complex/technical terminology” (Pinterest), “straightforward vocabulary and easy to perceive, understand, or interpret” (Snap), or “written for a general audience with varying technical skill levels, inclusive of all users” (TikTok)? There's a subtle difference in expectations associated with each framing. These terms don’t need to be defined in a vacuum.


Cybersecurity in retail: What does the future hold?

In the coming year, cybersecurity experts predict attackers will increasingly target Generative AI models used by retailers, creating significant potential for operational disruptions and data breaches. These AI systems, now critical to retail operations, are vulnerable to sophisticated attacks that could compromise customer service efficiency and expose critical business vulnerabilities. The core risk lies in the sophisticated ways attackers can exploit AI’s complex decision-making processes, turning what was once a technological advantage into a potential security liability. Retailers must recognise that their AI systems are not just technological tools, but potential entry points for cybercriminal activities. ... The complexity and distribution of digital ecosystems make them prime targets during high-demand periods. For example, as we have seen in the past, cyberattacks that hit supply chains can cause major delays and financial loss. These incidents underscore the vulnerabilities in supply chains during peak times of the year. In 2025, expect a rise in supply chain attacks during the holiday season, targeting ecommerce platforms and logistics providers, which could disrupt product availability and shipping.

Daily Tech Digest - January 26, 2025


Quote for the day:

“If you don’t try at anything, you can’t fail… it takes backbone to lead the life you want” -- Richard Yates

Here’s Why Physical AI Is Rapidly Gaining Ground And Lauded As The Next AI Big Breakthrough

If we are going to connect generative AI to all kinds of robots and other machines wandering around our homes, offices, factories, streets, and the like, we ought to expect the AI to do so properly, safely, and with aplomb. Can an AI that has only text-based data training adequately control and direct those real-world machines as they mix among people? Some assert that this is a highly dangerous concern. The generative AI relies on what is essentially book learning to guess what will happen when it instructs a robot to lift a chair or hold a dog aloft. Is that good enough to cope with the myriad of things that can go wrong? Perhaps the AI will, by text-based logic, assume that a dropped dog will bounce like a rubber ball. Ouch, the dog might not be amused. ... AI researchers are scurrying to craft Physical AI. The future depends on this capability. Machines and robots are going to be built and shipped to work side by side with humans. Physical AI will make or break whether those mechanizations are compatible with humans and operate properly in the real world, or instead are endangering and harmful.


Why workload repatriation must be part of true multi-cloud strategies

Repatriation can provide benefits such as cost optimization and enhanced control, but it also introduces significant challenges. Key obstacles organizations encounter during cloud repatriation include the absence of cloud-native services, limited access to provider-managed applications, the need for highly skilled professionals, and potentially substantial capital expenditures required for building or upgrading on-premises infrastructure. Migrating workloads back on-premises often results in the development of hybrid environments or, in cases where multiple public cloud providers are used, multi-cloud environments. This shift adds complexity to managing IT infrastructure, requiring greater coordination and expertise. In public cloud environments, providers offer a wide array of managed services, automated management, and orchestration capabilities that simplify operations and reduce the burden on IT teams. When repatriating workloads, organizations must find alternatives or develop in-house solutions to replicate these functionalities. This can be time-consuming, costly, and may result in reduced capabilities compared to cloud-native offerings. As such, organizations must carefully balance the trade-offs between the advanced capabilities of cloud-native solutions and the control offered by on-premises environments. 


3 hidden benefits of Dedicated Internet Access for enterprises

DIA is designed to support bandwidth-heavy tasks such as cloud-based applications and video conferencing. It ensures seamless connectivity, helping streamline operations and prevent performance issues. Routine activities like large file sharing, backups, and data transfers are completed more efficiently, while internal communication across multiple business locations becomes smoother and more reliable. Think of DIA as your business’s private Internet highway. Unlike shared connections, it provides uninterrupted service, essential for maintaining optimal workflows and boosting productivity. For companies that rely on consistent and high-performance Internet access, DIA offers a dependable solution tailored to meet these demands. ... Fast website loading times and smooth online transactions are essential for satisfying customers. DIA helps businesses deliver a premium online experience, which can significantly improve customer loyalty. This reliable performance extends to all business locations, including branch offices. With DIA, businesses can ensure consistent, high-quality interactions with their customers—whether accessing resources or reaching out through support channels. Additionally, DIA enhances customer support by ensuring messaging services remain continuously available, allowing businesses to respond quickly and efficiently to customer needs.


Data engineering - Pryon: Turning chaos into clarity

Data Engineering is the discipline that takes raw, unstructured data and transforms it into actionable, high-value insights. Without a strong data foundation, the 1 in 3 enterprises spending an average of $10M on AI projects next year alone are setting themselves up for failure. As data creation accelerates – 90% of the world’s data has been generated in the last two years – engineers are tasked with more than just managing it. They have to structure, organise and operationalise data so it can actually be useful and produce the right outputs. From building reliable pipelines to ensuring data quality, engineering teams play the central role in making systems that actually solve problems. ... Data synthesis is interesting, but taking action is paramount. The final step is putting it to work. Whether that means automating workflows, making real-time decisions, or delivering predictive insights, this is where the rubber meets the road. Agentic orchestration can enable systems to take the synthesised insights and act on them autonomously or with minimal human input. These engines bridge the gap between theory and practice, ensuring that your data doesn’t just sit idle – it drives measurable outcomes.


Leading with purpose: Insights from the Bhagavad Gita for modern managers

In a professional setting, the ability to manage emotions is crucial for success. A manager or individual who seeks gratification of the ego and cannot regulate their emotions is likely to face challenges in achieving results. Actions driven by a sense of false ego can lead to conflicts and misunderstandings, and ultimately hinder productivity. Such individuals may react impulsively rather than thoughtfully, allowing their emotions to cloud their judgment. When individuals learn to regulate their emotions and act from a place of calmness rather than chaos, they not only enhance their performance but also uplift those around them. A Sattvic approach to work fosters collaboration, creativity, and a shared sense of purpose. Conversely, when actions are driven by ego or excessive ambition (Tamsik), they often lead to stress and burnout. By embodying the teachings of the Gita—performing duties with dedication while remaining unattached to outcomes—individuals can achieve true mastery over their emotions. This mastery not only paves the way for personal success but also cultivates an environment where everyone can thrive together. While the entire Bhagavad Gita is replete with invaluable life lessons, these two shlokas stand out as particularly essential for effective management in the workplace.


Accelerating HCM Cloud Implementation With RPA

Robotic Process Automation (RPA) provides a practical solution to streamline these processes. ... Many cloud platforms require Multi-Factor Authentication (MFA), which disrupts standard login routines for bots. However, we have addressed this by programmatically enabling RPA bots to handle MFA through integration with SMS or email-based OTP services. This allows seamless automation of login processes, even with additional security layers. ... It’s essential that users are assigned the correct authorizations in an HCM cloud, with ongoing maintenance of these permissions as individuals transition within the organization. Even with a well-defined scheme in place, it’s easy for someone to be shifted into a role that they shouldn’t hold. To address this challenge, we have leveraged RPA to automate the assignment of roles, ensuring adherence to least-privilege access models. ... Integrating with HCM systems through APIs often involves navigating rate limits that can disrupt workflows. To address this challenge, we implemented robust retry logic within our RPA bots, utilizing exponential backoff to gracefully handle API rate limit errors. This approach not only minimizes disruptions but also ensures that critical operations continue smoothly.
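The retry pattern described can be sketched as follows. `RateLimitError` is a hypothetical stand-in for whatever exception a real HCM client raises on HTTP 429; the doubling delay plus random jitter keeps parallel bots from retrying in lockstep:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for the HCM client's HTTP 429 error."""

def call_with_backoff(api_call, max_retries=5, base_delay=1.0,
                      sleep=time.sleep):
    """Retry api_call on rate-limit errors, doubling the wait each
    attempt and adding jitter to desynchronise parallel bots."""
    for attempt in range(max_retries):
        try:
            return api_call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise                             # retries exhausted
            delay = base_delay * 2 ** attempt + random.uniform(0, 0.5)
            sleep(delay)
```

A bot would wrap each HCM API call, e.g. `call_with_backoff(lambda: client.assign_role(user, role))`, where `client.assign_role` is an assumed method name, not part of any specific product's API.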


MDM and genAI: A match made in Heaven — or something less?

Despite its promising potential, AIoT faces several hurdles. One major challenge is interoperability. Many companies use IIoT devices and platforms from different manufacturers, which are not always seamlessly compatible. This complicates the implementation of integrated AIoT solutions and necessitates standardised interfaces and protocols. IIoT platforms such as Cumulocity can integrate various services and devices. A well-chosen platform facilitates the integration of new devices, enables easy scaling, and supports the flexible adaptation of an IIoT strategy. It also allows integration with other systems and technologies, such as ERP or CRM systems, thereby embedding IIoT technologies into existing business processes. Moreover, robust platforms offer specialised security features to protect connected devices from potential cybercriminal attacks. Another critical aspect is data preparation. In IoT environments, data quality is often poorer than businesses assume. Applying AI to inadequately prepared data produces subpar models that fail to deliver expected results. ... A further challenge is the skills shortage. Developing and implementing AIoT systems requires expertise in fields such as data analysis, machine learning, and cybersecurity. The demand for skilled professionals exceeds current supply, prompting companies to invest in training and development programmes.


Enterprise Architecture and Complexity

Complex architectures are characterised by attributes that make them challenging to manage using traditional project or program management methods. These architectures often have many layers, interconnected parts, variables, and dynamics that are not immediately apparent or easily understood. Complex architectures are also unpredictable (Theiss 2023) due to the communication and interaction required across and between their components. Managing an architecture build and deployment requires both broad and deep understanding of the interdependencies, interactions, and inherent constraints. As increasing levels of automation are deployed at scale, greater visibility and transparency are needed to understand not only the technologies and applications in play, but also the intended and unintended consequences and behaviour they generate. Architectural artefacts and systems documentation (even if up to date) typically show elements such as nested operational processes as simple, generalised linkages and design patterns, which results in greater levels of ambiguity, not clarity. They only allow us to understand in part. As systems architectures become more complex in build, capability and scope, enhanced sense-making capabilities are needed to navigate components and to ensure a coherent, adaptive systems design.


Misinformation Is No. 1 Global Risk, Cyberespionage in Top 5

Misinformation campaigns in the form of deepfakes, synthetic voice recordings or fabricated news stories are now a leading mechanism for foreign entities to influence "voter intentions, sow doubt among the general public about what is happening in conflict zones, or tarnish the image of products or services from another country." This is especially acute in India, Germany, Brazil and the United States. Concern remains especially high following a year of the so-called "super elections," which saw heightened state-sponsored campaigns designed to manipulate public opinion.  ... Despite growing concerns, cyber resilience continues to be inadequate especially among small and mid-sized organizations, according to the report's findings. Thirty-five percent of small organizations believe their cyber resilience is inadequate, up from 5% in 2022. Many of these organizations lack the resources to invest in advanced cybersecurity measures, leaving them increasingly vulnerable to ransomware, phishing and other attacks. Seventy-one percent of cyber leaders say small organizations have already reached a "tipping point where they can no longer adequately secure themselves against the growing complexity of cyber risks." ... On one hand, AI-powered systems are proving invaluable in identifying threats, automating responses and analyzing vast amounts of data in real time.


Cloud repatriation – how to balance repatriation effectively and securely

Regardless of the reasons for making the move away from public cloud, the road to repatriation can be complex to navigate. Whether it is technical or talent issues, financial costs or compliance challenges, businesses making the switch should be prepared to spend time planning and executing an effective strategy. Within this strategy there are three areas that require special attention: observability, compliance and employing a holistic tech stack strategy. Observability is crucial in cloud repatriation because in order to move data and applications in-house, a business must understand them and how they are being used. It is only then you can ensure a smooth and effective transition. For example, there might be Shadow IT or AI that is being used by employees to get around IT policy and help them to get their work done faster. Sometimes these technologies will store data on a cloud service, so businesses need to be aware of them before making the switch. By leveraging observability, organizations can mitigate risks, optimize their infrastructure, and achieve successful repatriation that meets their strategic objectives. Compliance is also important as it is a major focus area for European and UK regulators with new and emerging regulations like DORA and NIS2 coming to the fore.


Daily Tech Digest - November 01, 2024

How CISOs can turn around low-performing cyber pros

When facing difficulties in both their professional and personal lives, people can start to withdraw and be less interested in contributing, even doing the bare minimum. They might also make mistakes more often or miss deadlines, or they can care less about how their colleagues or managers perceive their work. Body language can also provide insight into an employee’s emotional state and engagement level. When assigning tasks, Michelle Duval, founder and CEO at Marlee, a collaboration and performance AI for the workplace, looks her colleagues in the eyes. “Avoiding eye contact or visible sighing… are helpful clues,” she says. ... When it comes to helping employees improve their performance, the key point is to understand why they have problems in the first place and act quickly. “The best coaching depends on what type of problem you’re fixing,” says Caroline Ceniza-Levine, executive recruiter and career coach. “If the employee’s work product is suffering, they may need more direction or skills training. If the employee is disengaged, they may need help getting motivated – in this case, giving them more information around why their work matters and how important their contribution is may help.”


AI in Finserv: Predictive Analytics to Inclusive Banking

AI’s ability to synthesise vast amounts of data allows organisations to connect data from previously disparate sources, and then analyse it to detect historical patterns and deliver forward-looking insights. In the banking industry, this is happening at both a high level through traditional data analysis, and, increasingly, through more advanced AI tools including Natural Language Processing (NLP) and Machine Learning (ML). As organisations continue gathering these predictive analytics, many are also in the process of providing feedback to their AI systems which will ultimately improve their predictive accuracy over time. The main use case in which banks are currently seeing the biggest impact from AI-powered predictive insights is in forecasting consumer behaviour. ... AI-powered fraud detection algorithms can analyse vast amounts of transaction data in real-time at a scale that’s unattainable by humans. The real-time nature of these systems also allows organisations to prevent loss by intercepting anomalous transactions before they’re settled. This scalable, automatic approach also makes it easier for financial organisations to stay in compliance with relevant anti-money laundering (AML) and anti-terrorist financing regulations and avoid steep penalties.
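A production fraud model is far more sophisticated than this, but the core idea of scoring a transaction against an account's history before it settles can be shown in a minimal, purely illustrative form:

```python
import statistics

def flag_before_settlement(history, amount, z_cutoff=3.0):
    """Hold a transaction for review when its amount is a statistical
    outlier for this account -- a toy sketch of real-time scoring,
    not a production fraud model."""
    if len(history) < 5:
        return False                  # too little history to judge
    mean = statistics.fmean(history)
    std = statistics.pstdev(history) or 1.0
    return abs(amount - mean) / std > z_cutoff

past = [42.0, 55.0, 38.0, 60.0, 47.0, 51.0]
print(flag_before_settlement(past, 49.0))    # False: ordinary amount
print(flag_before_settlement(past, 900.0))   # True: held for review
```

The decisive property is that the check runs synchronously in the payment path, so an anomalous transaction can be intercepted before it settles rather than discovered in a later batch report.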


Critical Software Must Drop C/C++ by 2026 or Face Risk

The federal government is heightening its warnings about dangerous software development practices, with the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Bureau of Investigation (FBI) issuing stark warnings about basic security failures that continue to plague critical infrastructure. ... The report also states that the memory safety roadmap should outline the manufacturer’s prioritized approach to eliminating memory safety vulnerabilities in priority code components. “Manufacturers should demonstrate that the memory safety roadmap will lead to a significant, prioritized reduction of memory safety vulnerabilities in the manufacturer’s products and demonstrate they are making a reasonable effort to follow the memory safety roadmap,” the report said. “There are two good reasons why businesses continue to maintain COBOL and Fortran code at scale. Cost and risk,” Shimmin told The New Stack. “It’s simply not financially possible to port millions of lines of code, nor is it a risk any responsible organization would take.” ... Finally, it is good that CISA is recommending that companies with critical software in their care should create a stated plan of attack by early 2026, Shimmin said.


Into the Wild: Using Public Data for Cyber Risk Hunting

Threat hunting, by contrast, is a proactive approach. It means that cyber teams go out into the wild and proactively identify potential risks and threat patterns, isolating them before they can cause any harm. A threat-hunting team requires specific knowledge and skills. Therefore, it usually consists of various professionals, such as threat analysts, who analyze available data to understand and predict the attacker's behavior; incident responders, who are ready to reduce the impact of a security incident; and cybersecurity engineers, responsible for building a secure network solution capable of protecting the network from advanced threats. These teams are trained to understand their company's IT environment, gather and analyze relevant data, and identify potential threats. Moreover, they have a clear risk escalation and communication process, which helps them react effectively to threats and mitigate risks. Specialists often use a combination of tools that help in threat hunting. ... Endpoint detection and response (EDR) systems combine continuous real-time monitoring and collection of endpoint data with a rule-based automated response.


How to Keep IT Up and Running During a Disaster

Using IoT sensing technology can provide early warning of disaster events and keep an eye on equipment if human access to facilities is cut off. Sensors and cameras can be helpful in determining when it may be appropriate to switch operations to other facilities or back up servers. Moisture sensors, for example, can detect whether floods may be on the verge of impacting device performance. ... In disaster-prone regions, it is advisable to proactively facilitate relationships with government authorities and emergency response agencies. This can be helpful both in ensuring continued compliance and assistance in the event of a natural disaster. “There are certain aspects of [disaster response] that need to be captured,” Miller says. “A lot of times in crisis mode, that becomes a secondary focus. But [disaster management] systems allow the tracking and the recording of that information.” Being aware of deadlines for compliance reporting and being in contact with regulators if they might be missed can save money on potential fines and penalties. And notifying emergency response agencies may result in prioritization of assistance given the economic imperatives of IT continuity.


Breaking Down Data Silos With Real-Time Streaming

Traditional "extract, transform, load" and "extract, load, transform" data pipelines have historically been the primary method for moving data into analytics. But analytics consumers have often had limited control or influence over the source data model, which is typically defined by application developers in the operational domain. Data is also often stale and outdated by the time it arrives for processing. "By shifting data processing and governance, organizations can eliminate redundant pipelines, reduce the risk and impact of bad data at its source, and leverage high-quality, continuously up-to-date data assets for both operational and analytical purposes," LaForest said. Real-time data streaming is especially crucial in sectors such as finance, e-commerce and logistics, where even a few seconds of delay can negatively impact customer satisfaction and profitability. ... Real-time data streaming is emerging as the foundation for the next wave of AI innovation. For predictive AI and pattern recognition, data needs to be available in real time to drive accurate, immediate insights. Real-time data pipelines are essential for enabling AI systems to deliver smarter, faster insights and drive more accurate decision-making across the enterprise.
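The contrast with batch ETL comes down to when the computation happens. As a minimal illustration: instead of recomputing an aggregate after a nightly load, a streaming consumer updates it as each event arrives, so the value is never stale:

```python
def running_average(stream):
    """Consume events one at a time and yield an up-to-date aggregate
    after each -- the streaming alternative to waiting for a nightly
    batch job to recompute everything."""
    count, total = 0, 0.0
    for value in stream:
        count += 1
        total += value
        yield total / count    # always current, never stale

# Each reading updates the insight immediately on arrival.
readings = [10.0, 20.0, 30.0]
print(list(running_average(readings)))   # [10.0, 15.0, 20.0]
```

Real streaming platforms add partitioning, windowing, and fault tolerance on top, but the shift is the same: processing moves to the moment of arrival instead of a scheduled batch.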


Is now the right time to invest in implementing agentic AI?

What makes agentic AI autonomous or able to take actions independently is its ability to interpret data, predict outcomes, and make decisions, learning from new data — unlike traditional RPA, which falters when encountering unexpected data, said Cameron Marsh, senior analyst at Nucleus Research. This adaptive nature of agentic AI, according to Chada, can help enterprises increase efficiency by handling complex, variable tasks that traditional RPA can’t manage, such as the roles of a claims adjuster, a loan officer, or a case worker, provided that it has access to the necessary data, workflows, and tools required to complete the task. ... Some platform vendors are already offering low-code and no-code agent development and management platforms, but these are limited in their functionality to building simple agents or modifying templates for agents built by the vendors themselves, analysts said. “Creating more complex agents, specifically ones that require customized integrations and nuanced decision-making abilities still demands some technical understanding of data flows, machine learning model tuning, and API integrations,” Futurum’s Hinchcliffe said, adding that there is a learning curve on these platforms and that the migration journey could be resource intensive.


How open-source MDM solutions simplify cross-platform device management

Few MDM solutions effectively address the challenge of device diversity, as most are designed to manage specific hardware or software platforms. This limitation forces businesses to juggle multiple solutions to cover their entire device ecosystem. Open-source MDM solutions, however, offer flexible, modular architectures that adapt to various operating systems and device types. Open standards and extensible APIs ensure cross-platform compatibility, from mobile devices to servers to IoT endpoints. Unified management interfaces abstract platform complexities, providing consistent administration across diverse devices, while collaboration with open-source communities broadens device support. These approaches simplify management for IT teams in heterogeneous environments, reducing the need for multiple specialized solutions. ... An effective MDM solution enhances device management in remote locations by enabling developers and administrators to create lightweight agents for low-bandwidth environments and implement platform-agnostic policies for diverse ecosystems. With custom scripts and modular components, businesses can tailor management workflows to align with specific operational demands, ensuring seamless integration across various environments. 
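The "platform-agnostic policies" idea above can be sketched as a declarative policy evaluated by a lightweight agent. All names here are hypothetical (the `POLICY` keys, the fact fields); in a real open-source MDM agent, per-platform collector modules would query the OS for these facts:

```python
import platform

# A declarative, platform-agnostic policy: what must be true on any device.
POLICY = {
    "disk_encryption_required": True,
    "min_os_build": 10,
}

def collect_facts():
    # Stub collector; a real agent would have per-OS modules (macOS,
    # Windows, Linux, mobile) that all return this same fact schema.
    return {"os": platform.system(), "os_build": 12, "disk_encrypted": True}

def evaluate(policy, facts):
    # The evaluation logic never branches on the platform, only on facts,
    # which is what keeps the policy portable across device types.
    violations = []
    if policy["disk_encryption_required"] and not facts["disk_encrypted"]:
        violations.append("disk not encrypted")
    if facts["os_build"] < policy["min_os_build"]:
        violations.append("OS build too old")
    return {"compliant": not violations, "violations": violations}

print(evaluate(POLICY, collect_facts()))
```

Separating fact collection (platform-specific) from policy evaluation (platform-agnostic) is the design choice that lets one management interface span mobile devices, servers, and IoT endpoints.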


4 Essential Strategies for Enhancing Your Application Security Posture

Whatever the cause, the torrent of false positives wastes time, lowers security team morale, and obscures real threats. As a result, risks of a major oversight increase, and response time to actual threats slows, leading to undetected breaches, data loss, financial damage, and erosion of customer trust. ... To successfully implement shifting left, AppSec must deliver solutions that eliminate the burden of manual security tasks. The ASPM strategy is to integrate tools directly into the development environment to make security checks a seamless part of the development workflow. Such integrations would provide real-time feedback and actionable security guidance, minimizing disruptions and significantly enhancing productivity. ... One of the biggest challenges in AppSec today is tool sprawl. The wide array of tools promising to plug different security gaps burdens security teams with a complex security ecosystem that locks critical data into tool-specific silos. This data fragmentation makes it impossible for security teams to gain a holistic view of the security environment, leading to confusion and missed vulnerabilities when insights from one tool don’t correlate with insights from another.
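Breaking down the tool-specific silos described above starts with normalizing findings into one schema so results from different scanners can be correlated. The sketch below uses invented finding formats (the `SQLI-01` rule ID and CVE number are placeholders, not real advisories) to show the correlation step:

```python
# Raw findings from two hypothetical tools, each with its own schema.
sast_findings = [
    {"rule": "SQLI-01", "file": "app/db.py", "line": 42, "severity": "high"},
]
sca_findings = [
    {"cve": "CVE-2024-0001", "package": "requests", "severity": "medium"},
    {"cve": "CVE-2024-0001", "package": "requests", "severity": "medium"},  # duplicate alert
]

def normalize(finding, source):
    # Map tool-specific fields onto one shared schema.
    key = finding.get("rule") or finding.get("cve")
    location = finding.get("file") or finding.get("package")
    return {"id": key, "where": location,
            "severity": finding["severity"], "source": source}

def correlate(*feeds):
    merged = {}
    for source, findings in feeds:
        for f in findings:
            n = normalize(f, source)
            # Identical (id, location) pairs collapse into one record,
            # cutting duplicate alerts before they reach the triage queue.
            merged.setdefault((n["id"], n["where"]), n)
    return list(merged.values())

unified = correlate(("sast", sast_findings), ("sca", sca_findings))
print(len(unified))  # 2 unique findings instead of 3 raw alerts
```

Even this toy version shows the payoff: three raw alerts become two deduplicated findings in a single view, which is the holistic picture the fragmented per-tool silos deny security teams.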


How a classical computer beat a quantum computer at its own game

Confinement is a phenomenon that can arise under special circumstances in closed quantum systems and is analogous to the quark confinement known in particle physics. To understand confinement, let's begin with some quantum basics. On quantum scales, an individual magnet can be oriented up or down, or it can be in a "superposition"—a quantum state in which it points both up and down simultaneously. The degree to which the magnet points up or down determines how much energy it has when it's in a magnetic field. ... Serendipitously, IBM had, in their initial test, set up a problem where the organization of the magnets in a closed two-dimensional array led to confinement. Tindall and Sels realized that since the confinement of the system reduced the amount of entanglement, it kept the problem simple enough to be described by classical methods. Using simulations and mathematical calculations, Tindall and Sels came up with a simple, accurate mathematical model that describes this behavior. "One of the big open questions in quantum physics is understanding when entanglement grows rapidly and when it doesn't," Tindall says. 
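The energy picture for a single quantum magnet can be worked out exactly, which gives a feel for why field strengths matter. This is an illustrative single-spin calculation, not the IBM experiment itself (that involved over a hundred coupled spins): a spin-1/2 in longitudinal field h_z and transverse field h_x has Hamiltonian H = -h_z·Z - h_x·X, a 2x2 matrix whose eigenvalues follow from the quadratic formula.

```python
import math

def single_spin_energies(h_z, h_x):
    # H = -h_z * Z - h_x * X as a 2x2 matrix:
    #   [[-h_z, -h_x],
    #    [-h_x,  h_z]]
    # Its eigenvalues are +/- sqrt(h_z^2 + h_x^2).
    e = math.sqrt(h_z**2 + h_x**2)
    return -e, +e

# With only a longitudinal field, "up" and "down" are the energy eigenstates.
print(single_spin_energies(1.0, 0.0))   # (-1.0, 1.0)
# A transverse field mixes them: the eigenstates become superpositions of
# up and down, and the energy gap widens to 2*sqrt(2).
print(single_spin_energies(1.0, 1.0))
```

The same principle, scaled up to many coupled spins, is what sets the energy cost of flipping spins against their neighbors; when that cost grows with separation, the system exhibits the confinement that kept entanglement low in IBM's setup.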



Quote for the day:

"The meaning of life is to find your gift. The purpose of life is to give it away." -- Anonymous