
Daily Tech Digest - December 17, 2025


Quote for the day:

"Don't worry about being successful but work toward being significant and the success will naturally follow." -- Oprah Winfrey



5 key agenticops practices to start building now

“AI agents in production need a different playbook because, unlike traditional apps, their outputs vary, so teams must track outcomes like containment, cost per action, and escalation rates, not just uptime,” says Rajeev Butani, chairman and CEO of MediaMint. ... Architects, devops engineers, and security leaders should collaborate on standards for IAM and digital certificates for the initial rollout of AI agents. But expect capabilities to evolve, especially as the number of AI agents scales. As the agent workforce grows, specialized tools and configurations may be needed. ... Devops teams will need to define the minimally required configurations and standards for platform engineering, observability, and monitoring for the first AI agents deployed to production. Then, teams should monitor their vendor capabilities and review new tools as AI agent development becomes mainstream. ... Select tools and train SREs on the concepts of data lineage, provenance, and data quality. These areas will be critical to up-skilling IT operations to support incident and problem management related to AI agents. ... Leaders should define a holistic model of operational metrics for AI agents, which can be implemented using third-party agents from SaaS vendors and proprietary ones developed in-house. ... User feedback is essential operational data that shouldn’t be left out of scope in AIops and incident management. This data not only helps to resolve issues with AI agents, but is critical for feeding back into AI agent language and reasoning models.
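The outcome metrics named above — containment, cost per action, escalation rate — are simple ratios over session logs. A minimal Python sketch, where the window model and field names are illustrative assumptions rather than any vendor's schema:

```python
from dataclasses import dataclass

@dataclass
class AgentOpsWindow:
    """Rolled-up outcomes for a batch of AI-agent sessions (fields are illustrative)."""
    sessions: int       # total agent sessions in the window
    escalated: int      # sessions handed off to a human
    total_cost: float   # summed LLM/tool spend for the window, in dollars
    actions: int        # discrete agent actions taken

    @property
    def escalation_rate(self) -> float:
        # Share of sessions that required a human.
        return self.escalated / self.sessions

    @property
    def containment_rate(self) -> float:
        # Share of sessions the agent resolved without escalation.
        return 1.0 - self.escalation_rate

    @property
    def cost_per_action(self) -> float:
        return self.total_cost / self.actions

window = AgentOpsWindow(sessions=200, escalated=30, total_cost=18.0, actions=1200)
print(round(window.containment_rate, 3))  # 0.85
print(round(window.cost_per_action, 4))   # 0.015
```

In practice these would be computed per time window and alerted on alongside uptime, which is the shift the quote is arguing for.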


The great AI hype correction of 2025

The pendulum from hype to anti-hype can swing too far. It would be rash to dismiss this technology just because it has been oversold. The knee-jerk response when AI fails to live up to its hype is to say that progress has hit a wall. But that misunderstands how research and innovation in tech work. Progress has always moved in fits and starts. There are ways over, around, and under walls. Take a step back from the GPT-5 launch. It came hot on the heels of a series of remarkable models that OpenAI had shipped in the previous months, including o1 and o3 (first-of-their-kind reasoning models that introduced the industry to a whole new paradigm) and Sora 2, which raised the bar for video generation once again. That doesn’t sound like hitting a wall to me. ... Even an AGI evangelist like Ilya Sutskever, chief scientist and cofounder at the AI startup Safe Superintelligence and former chief scientist and cofounder at OpenAI, now highlights the limitations of LLMs, a technology he had a huge hand in creating. LLMs are very good at learning how to do a lot of specific tasks, but they do not seem to learn the principles behind those tasks, Sutskever said in an interview with Dwarkesh Patel in November. It’s the difference between learning how to solve a thousand different algebra problems and learning how to solve any algebra problem. “The thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people,” Sutskever said.


The future of responsible AI: Balancing innovation with ethics

Trust begins with explainability. When teams understand the reasons for a model’s behavior — why a certain piece of code was generated, a certain test was selected, a certain dataset was prioritized — they can validate it and fix it. Explainability matters to customers as well. Research shows that when customers are clear on when and how AI is influencing decisions, they trust the brand more. This does not require sharing proprietary model architectures; it simply requires transparency around AI in the flow of decision making. Another emerging pillar of trust is the responsible use of synthetic data. In privacy-sensitive environments, companies are generating domain-specific synthetic datasets for experimentation. LLM (large language model)-powered agents can be used in multi-agent pipelines to filter the outputs for regulatory compliance, thematic compliance, and structural accuracy — all of which help teams train or fine-tune models without compromising data privacy. ... Responsible AI is no longer just the last step in the workflow. It’s becoming a blueprint for how teams build, release, and iterate. The future will belong to organizations that think of responsibility as a design choice, not a compliance checkbox. The goal is the same whether it’s about using synthetic data safely, validating generative code, or raising overall explainability in workflows: to create AI systems that people trust and that teams can depend on.


Thriving in the unknown future

To navigate this successfully, we understood that our first challenge was one of mindset. How could we maintain agility of thinking and resilience, while also meeting our customers’ anticipated needs for a specific, defined product on target deadlines? Since the core of our offering is technological excellence, which ensures unmatched data accuracy, depth of insight and business predictions, how could we insist on this high level of authority amid the swirling changes all around us? We approach our work from a new point of view, and with a great deal of curiosity and imagination. ... With all the hype around AI, it is easy for our customers and our organizations to expect it to achieve… everything. But, as professionals building these tools, we know this is not the case. Many internal stakeholders and customers might not understand the difference between predictive analytics, machine learning, and generative AI, leading to misaligned expectations. ... Although our product, R&D, data science, project management and customer success teams are each independent, we work cross-functionally to foster the ability for swift action and change when needed. Engineers, data scientists and product managers work together for holistic problem-solving. These collaborations are less formalized, instituted per project or issue, so colleagues feel free to turn to each other for assistance while remaining focused on their individual projects.


Tokenization takes the lead in the fight for data security

Because tokenization preserves the structure and ordinality of the original data, it can still be used for modeling and analytics, turning protection into a business enabler. Take private health data governed by HIPAA, for example: tokenization means that data can be used to build pricing models or for gene therapy research, while remaining compliant. "If your data is already protected, you can then proliferate the usage of data across the entire enterprise and have everybody creating more and more value out of the data," Raghu said. "Conversely, if you don’t have that, there’s a lot of reticence for enterprises today to have more people access it, or have more and more AI agents access their data. Ironically, they’re limiting the blast radius of innovation. The tokenization impact is massive, and there are many metrics you could use to measure that – operational impact, revenue impact, and obviously the peace of mind from a security standpoint." ... While conventional tokenization methods can involve some complexity and slow down operations, Databolt seamlessly integrates with encrypted data warehouses, allowing businesses to maintain robust security without slowing performance or operations. Tokenization occurs in the customer’s environment, removing the need to communicate with an external network to perform tokenization operations, which can also slow performance.
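The reason tokenized data stays usable for analytics is that tokens are deterministic and format-preserving: equal inputs yield equal tokens, so joins and aggregations still line up. A minimal Python sketch of that property — the key, function name, and digit-mapping scheme here are illustrative assumptions, and preserving ordinality (as the article mentions) would additionally require order-preserving techniques not shown:

```python
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # stand-in for a managed key kept in the customer's environment

def tokenize(pan: str) -> str:
    """Deterministic, format-preserving token for a digit string.

    Same input -> same token, and the token keeps the original length and
    digit-only format, so protected columns can still be joined and
    aggregated without exposing the real value.
    """
    digest = hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()
    digits = "".join(str(int(c, 16) % 10) for c in digest)  # map hex chars to digits
    return digits[:len(pan)]

token = tokenize("4111111111111111")
print(len(token), token.isdigit(), token == tokenize("4111111111111111"))
# 16 True True
```

A production vault would also need a collision-free, reversible mapping (e.g. format-preserving encryption along the lines of NIST FF1), which a truncated HMAC does not provide; the sketch only illustrates why deterministic, format-preserving tokens keep protected data analytically useful.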


Enterprises to prioritize infrastructure modernization in 2026

The rise of AI has heightened the importance of IT modernization, as many organizations are still reliant on outdated, legacy infrastructure that is ill-equipped to handle modern workload requirements, says tech solutions provider World Wide Technology (WWT). ... The move to modernize data center infrastructure has many organizations looking at private cloud models, according to the WWT report: “The drive toward private cloud is fueled by several needs, with one primary driver being greater data security and privacy. Industries like finance and government, which handle sensitive information, often find private cloud architectures better suited for meeting strict compliance requirements. ... There is also a move to build up network and compute abilities at the edge, Anderson noted. “Customers are not going to be able to home run all that AI data to their data center and in real time get the answers they need. They will have to have edge compute, and to make that happen, it’s going to be agents sitting out there that are talking to other agents in your central cluster. It’s going to be a very distributed hybrid architecture, and that will require a very high speed network,” Anderson said. ... Such modernization needs to take into consideration power and cooling needs much more than ever, Anderson said. “Most of our customers are not sitting there with a lot of excess data center power; rather, most people are out of power or need to be doing more power projects to prepare for the near future,” he said.


How researchers are teaching AI agents to ask for permission the right way

Under-permissioning appeared mostly with highly sensitive information. Social Security numbers, bank account details, and children's names fell into this category. Participants withheld Social Security numbers almost half the time, even in tasks where the number would be necessary. The researchers noted that people often stayed cautious when the data touched on financial or identity-related matters. This tension between convenience and caution opens the door to new risks when such systems move from controlled studies into production environments. Brian Sathianathan, CTO at Iterate.ai, said the risk extends far beyond the model itself. “Arguably the biggest vulnerability isn’t so much the permission system itself but the infrastructure that it all runs on. ... Accuracy alone will not solve security concerns in sensitive fields. Sathianathan said organizations need to treat permission inference as protected infrastructure. “Mitigation here, in practice, means running permission inference behind your firewall and on your hardware. You should treat it like your SIEM where things are isolated, auditable, and never outsourced to shared infrastructure. You can’t let the permission system learn from unvetted data.” ... “The paper shows that collaborative filtering can predict user preferences with high accuracy, which is good, but the challenge for regulated industries is more in ensuring that compliance requirements take precedence over learned patterns even when users would prefer otherwise.”
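The collaborative-filtering idea the paper describes can be illustrated with a toy similarity model: predict whether a new user would permit an agent to use a field from the choices of users with similar sharing histories. All data, field names, and function names below are invented for illustration; the closing comment reflects the compliance caveat, not the paper's actual implementation.

```python
import math

# Hypothetical sharing history: 1 = the user let the agent use the field,
# 0 = the user withheld it.
users = {
    "u1": {"email": 1, "phone": 1, "ssn": 0, "bank": 0},
    "u2": {"email": 1, "phone": 1, "ssn": 0, "bank": 1},
    "u3": {"email": 1, "phone": 0, "ssn": 0, "bank": 0},
}

def cosine(a: dict, b: dict, keys) -> float:
    """Cosine similarity between two users over the given feature keys."""
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(a[k] ** 2 for k in keys))
    nb = math.sqrt(sum(b[k] ** 2 for k in keys))
    return dot / (na * nb) if na and nb else 0.0

def predict_share(target: dict, field: str) -> float:
    """Similarity-weighted average of other users' choices for `field`."""
    keys = list(target)  # features observed for the new user
    num = den = 0.0
    for history in users.values():
        sim = cosine(target, history, keys)
        num += sim * history[field]
        den += sim
    return num / den if den else 0.5

# New user who shared email and phone but withheld bank details.
p = predict_share({"email": 1, "phone": 1, "bank": 0}, "ssn")
print(p)  # 0.0 -- every observed user withheld "ssn"

# In a regulated deployment, a hard policy layer (e.g. "never auto-share ssn")
# must take precedence over this learned score, whatever the prediction says.
```

The design point matches the quote: the learned score is a convenience signal, and compliance rules sit above it as non-negotiable overrides.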


Bank Tech Planning 2026: What’s Real and What’s Hype?

Cybersecurity issues underpin every aspect of modern banking. With digital channels, cloud platforms and open APIs, financial institutions are exposed to increasingly sophisticated attacks, including ransomware, phishing and systemic fraud. Strong cybersecurity frameworks protect customer data, ensure regulatory compliance, and maintain operational continuity. ... Legacy core systems constrain banks’ ability to innovate, integrate with partners, and scale efficiently. Cloud-native or hybrid-core architectures provide flexibility, reduce maintenance burdens, and accelerate product delivery. By decoupling core functions from hardware limitations, banks gain resilience and the agility to respond quickly to market changes. ... Real-time payment infrastructure allows immediate settlement of transactions, eliminating delays inherent in batch processing. This capability is critical for consumer expectations, B2B cash flow, and operational efficiency. It also supports modern business needs, such as instant payroll, vendor disbursement, and high-frequency transfers. ... Modern banks rely on consolidated data platforms and advanced analytics to make timely, informed decisions. Predictive modeling, fraud detection and customer insights depend on high-quality, integrated data. Analytics also enables proactive risk management, operational efficiency and personalized customer experiences.


Are You a Modern Professional?

The report flags an overreliance on tech that could crimp professional development and lead to job losses, as well as a tendency to hold AI to a higher bar than humans. “More than 90% of professionals said they believe computers should be held to higher standards of accuracy than humans,” the report notes. “About 40% said AI outputs would need to be 100% accurate before they could be used without human review, meaning that it’s still critical that humans continue to review AI-generated outputs.” ... Professionals are involved across the AI landscape—as developers, providers, deployers and users—as defined by the EU AI Act. “While this provides opportunities, it also exposes professionals to risks at every stage—from biases, hallucinations, dependencies, misuse and more,” notes Dr Florence G’Sell, professor of private law at the Cyber Policy Center at Stanford University. “Opacity complicates the situation, as it makes assessing model performance difficult. To mitigate these risks, organizations could seek independent external assessment. But developers are reluctant to provide auditors access to data sources, model weights and code. This limits the ability to evaluate and ensure compliance with responsible AI principles.” ... Uncertain regulatory issues are already taking a toll on professionals, with more than 60% of enterprises in the Asia-Pacific experiencing moderate to significant disruption to their IT operations.


Why The Ability To Focus Will Be Crucial For Future Leaders

Focus has become a fundamental value, as noise and excess have taken over our daily routines. Every notification, interruption or sense of urgency activates our brain’s alert system, diverting energy from the prefrontal cortex, the region responsible for decision making, planning and strategic thinking. In the process, strategic vision gives way to the micro decisions of the day-to-day. This is what some neuroscientists call a "fragmented attention" state, in which the brain reacts more than it creates. For leaders, this means you become reactive rather than innovative. ... Leaders who learn to regulate their own mental operating system can gain a decisive advantage and the ability to sustain clarity amid chaos. You can start with intentional pauses throughout the day—simple practices such as deep breathing, brief walks or moments of silence. Equally important is noticing when your mind drifts and deliberately working to bring it back. ... Modern leaders often overvalue expression and undervalue absorption. Yet, from a neurobiological standpoint, silence is not the absence of thought; it’s the synchronization of neural rhythms. One study found that periods of intentional quiet—no input, no analysis, no output—can activate the prefrontal cortex and strengthen the brain’s capacity for integration. Put another way: The mind reorganizes fragments into coherence only when it’s not forced to produce. In a culture addicted to immediacy, mental silence, time to recover and intentional breaks become a competitive advantage.

Daily Tech Digest - February 20, 2025


Quote for the day:

"Increasingly, management's role is not to organize work, but to direct passion and purpose." -- Greg Satell


The Business Case for Network Tokenization in Payment Ecosystems

Network tokenization replaces sensitive Primary Account Numbers with tokens, rendering stolen data useless to fraudsters and addressing a major area of fraud: online payments. "Fraud rates are seven times higher online than in physical stores, as criminals exploit exposed card numbers," Mastercard's chief digital officer Pablo Fourez told Information Security Media Group. Shifting to tokenization protects businesses from financial losses and safeguards reputation and customer trust. ... But adoption of network tokenization does come with challenges including issuer readiness, regulatory hurdles and inconsistent implementations. Integrating network tokenization across multiple card networks requires multiple integrations, ensuring interoperability and maintaining high security standards, Fourez said. Compliance with varying regulatory requirements and achieving scalability without performance issues can be resource-intensive, he said. Ramakrishnan points to delays in token provisioning that may slow the speed of transactions if the technology is not scalable. Situations in which one entity in the payment ecosystem does not use network tokens can be major failure points that can lead to transaction failure and cart abandonment.


The hidden gap in cyber recovery: What happens when roles and processes are overlooked

There’s a big difference between disaster recovery (DR) and cyber recovery. For DR, infrastructure and backup teams are the central players and an organization can be up and running in no time. Cyber recovery, however, involves the entire business — backup teams, network teams, cloud personnel, incident response teams from security, teams that are validating the active directory before restores, as well as the application owners and business owners that depend on those functions. ... “There are bigger questions that you only get to by testing your process,” Grantham says. “Whatever your business is, it’s about looking at that data and saying, how do I provide access in this modified environment? For every one of the applications supporting that, having a run book to say, this is the people, the process, linked to the technology to get me to a user in the system performing their daily function because they need to be able to do their job. That run book gets them there. If your data is just sitting on a hard drive in the middle of a data center, how does that help your business?” ... “The idea that cyber recovery strategies require continual evolution, just like zero trust is an evolution of different identity standards, is not something that a lot of businesses have accepted yet,” Grantham says. 


Microsoft Makes Quantum Computing Breakthrough With New Chip

While it’s been working on its own quantum computing hardware, Microsoft has also been building out a quantum computing stack, with its Q# development language and quantum algorithms that can run on the quantum hardware from IonQ, Pasqal, Quantinuum, QCI, and Rigetti that’s available through Azure — but the most powerful systems so far are still in the 20-30 qubit range. ... A prototype fault-tolerant quantum computer will be available “in years, not decades,” promised Chetan Nayak, Microsoft’s VP of quantum hardware. The potential of topological qubits is why DARPA announced earlier this month that Microsoft is one of the first two companies to be invited to join its rigorous program for investigating whether it’s possible to build a useful quantum computer — where the value of the computing it can do is worth more than what it costs to build and run — by 2033, using what the agency calls underexplored systems. ... Initially, there are just eight physical qubits in the Majorana 1 QPU, which Microsoft can assign in different ways to get the number of logical qubits it wants. Calling it a QPU is a reminder that there will probably be a lot of different kinds of quantum computer, and that researchers will pick the one that suits them — like choosing a different GPU for a specific workload.


CISO Conversations: Kevin Winter at Deloitte and Richard Marcus at AuditBoard

A CISO can only be as good as the security team. Assembling a strong team requires good selection and effective management: that is, who do you recruit, and how do you maintain top efficiency? Recruitment is a balance between multiple individual rock stars and a single cohesive team. That’s a personal choice for each CISO, but usually involves a compromise: the best possible individuals with the widest possible range of diversity that will still make a single team. Having recruited the team, the CISO must help them excel both as individuals and as one team. “I love the Japanese concept of ‘ikigai’,” said Marcus. Ikigai can be defined as finding your life’s purpose – the meeting point of personal passion, skills, mission, and vocation. “I think you need to deliver an experience for the security team that checks all these boxes. They need to have interesting problems. They need to be using modern technology with some autonomy over what they use. You need to provide a sense of purpose – that what they’re doing is not just about the immediate technical work, but will have a broader impact on the company, the industry, and the world at large. And of course, you must pay them what they’re worth. I think if you do all these things, you’ll have a very happy and motivated and engaged team.”


Will AI destroy human creativity? No - and here's why

Today's AI models do more than automate. They engage. They understand user input conversationally, simulate thought processes, and adapt to preferences. AI's ability to adapt comes from machine learning constantly improving by analyzing huge amounts of data. This has made AI smarter and easier for people and businesses to use. The impact is undeniable in creative industries as AI tools can design logos, generate intricate artwork, and write compelling narratives, offering creators new possibilities. These advancements are transforming how people work, create, and innovate. Generative AI is now the focus of business strategies, with companies using these technologies to enhance efficiency and engage with their audiences in new ways. ... That said, the role of human creativity isn't being erased; it's evolving. Perhaps the designers and writers of tomorrow aren't disappearing but transforming into prompt engineers and crafting ideas in collaboration with these tools, mastering a new kind of artistry. Let's face it: Just because AI creates something doesn't mean it's good. The ability to discern, curate, and refine that intangible "eye" for greatness will always remain profoundly human. Unless, of course, Skynet becomes a reality.


Unknown and unsecured: The risks of poor asset visibility

Asset visibility remains a critical issue because organizations often lack a real-time, unified view of their IT, OT, and cloud environments. Shadow IT, unmanaged endpoints, remote work and third-party integrations create blind spots, which expand the attack surface. Without complete visibility, security teams struggle to detect and respond to threats effectively, leaving organizations vulnerable to breaches and compromises. Good visibility across enterprise assets is no longer just a nice-to-have; it’s a necessity to survive in the digital world. ... Improving visibility of digital assets is critical for all organizations; otherwise, blind spots will exist in networks which criminals can exploit. Organizations must treat every endpoint as a potential entry point, ensuring it is seen and secured. It’s also important to remember that perfect technology doesn’t exist; vulnerabilities will always surface in products, so organizations must not only have an inventory of their assets, but also the ability to apply patches and security updates automatically, without necessarily having to pull all systems down. Improving OT visibility requires a specialised approach due to the sensitive nature of legacy and ICS systems.


Hacking Cybersecurity Leadership

Cybersecurity culture often fosters a sense of individualism that lends itself to operating in isolation—individual interests in areas of cybersecurity lead to individually-driven projects, individual certifications, etc. That being said, being siloed is not a sustainable mode of operation. For most cyber professionals, the challenges are too complex to resolve individually, and negative experiences (failure, shame, guilt, embarrassment, etc.), when experienced alone, are likely to take an even greater toll than when those experiences are shared with others. ... In order to boost a sense of competence at the individual level, leaders need to create a learning-oriented environment that provides opportunities for individuals to explore, gather, and practice applying new information. There are specific strategies to build or strengthen these aspects of the work environment. ... Leaders can also embrace a growth-mindset culture whereby mistakes do not equate to failures; rather, mistakes are repositioned as learning opportunities to develop and grow. This allows individuals to safely explore and practice various aspects of their work. It’s important to note that this approach also requires a shift toward more developmental, rather than punitive or evaluative, feedback.


Real-World AppSec Priorities Observed in BSIMM15

Many organizations are still in the nascent stages of defining AI-specific attack surfaces and integrating security mechanisms. To stay ahead of these emerging risks, organizations should proactively gather intelligence on AI-related threats, establish secure design patterns for AI models, and ensure that AI security is seamlessly integrated into existing policies and frameworks. Proactivity is key here — a well-rounded strategy to leverage the potential AI can offer must be accompanied by strategic approaches to counter risks and threats it introduces. The use of adversarial testing, which involves simulating potential attacks to identify vulnerabilities, has more than doubled over the past year. This trend indicates a growing recognition among companies of the importance of continuously testing AI models to prevent them from being exploited by malicious actors. While it is not yet possible to definitively attribute the rise in these BSIMM activities to AI-specific concerns, it is evident that these practices will play a crucial role in addressing the emerging risks associated with AI. ... The decline does raise a red flag around the preparedness of organizations to defend against the evolving threat landscape. It also illustrates a need for security education and awareness initiatives. 


Why Best-of-Breed Security Is Non-Negotiable for SIEM

With cyber threats evolving at an unprecedented pace, security leaders can no longer afford to treat SIEM as just another layer in a bloated security stack. Instead, they must take a strategic approach, ensuring that their SIEM leverages truly best-of-breed security—one that enhances integration, streamlines operations, and delivers actionable threat intelligence. So, is more always better? Or is it time to redefine what best-of-breed really means for SIEM? ... The appeal of best-of-breed security is clear: superior threat detection, deeper visibility, and greater flexibility to adapt to evolving threats. However, this approach also introduces complexity. Managing multiple vendors, ensuring seamless integration, and avoiding operational inefficiencies can quickly become overwhelming. So, how do security leaders strike the right balance? Success lies in strategic selection, integration, and optimization—choosing tools that complement each other and enhance Security Information and Event Management (SIEM) rather than adding more noise. Adopting a best-of-breed security approach within a SIEM framework offers several advantages. By integrating specialized security solutions, organizations can optimize threat detection, improve agility, and reduce reliance on a single vendor. 


Digital twins and transitioning to a greener, safer industrial sector

Shah finds the term digital twins is often misunderstood. “Digital twins are not a single technology and standalone solution, but a strategic framework – one that combines and leverages multiple technologies. This can include AI, reality capture, 3D reality models and advanced web technologies which create a virtual 3D replica of an industrial site and its facilities.” Aiming to be the first climate-neutral continent by 2050, Europe has set some aspirational goals and according to Shah, digital twins could be a real game-changer in how the world could future-proof its industrial sites and transition to net zero. ... She noted many industrial sites struggle with issues related to technical documents and on-the-ground conditions, and this is an issue because inaccurate information can cause accidents to occur. AI and 3D-rendered models enable experts to envision a scene in real time, allowing for greater accuracy than is often permitted by a physical walk-through of a facility. “What’s more, site personnel can also simulate processes like ‘lockout tagout’ safely, where machines are isolated and shut down for maintenance, without real-world risks, and predict what could go wrong if an asset was isolated incorrectly, for example.”