Showing posts with label entrepreneur. Show all posts
Showing posts with label entrepreneur. Show all posts

Daily Tech Digest - September 07, 2025


Quote for the day:

"The struggle you're in today is developing the strength you need for tomorrow." -- #Soar2Success


The Automation Bottleneck: Why Data Still Holds Back Digital Transformation

Even in firms with well-funded digital agendas, legacy system sprawl is an ongoing headache. Data lives in silos, formats vary between regions and business units, and integration efforts can stall once it becomes clear just how much human intervention is involved in daily operations. Elsewhere, the promise of straight-through processing clashes with manual workarounds, from email approvals and spreadsheet imports to ad hoc scripting. Rather than symptoms of technical debt, these gaps point to automation efforts that are being layered on top of brittle foundations. Until firms confront the architectural and operational barriers that keep data locked in fragmented formats, automation will also remain fragmented. Yes, it will create efficiency in isolated functions, but not across end-to-end workflows. And that’s an unforgiving limitation in capital markets where high trade volumes, vast data flows, and regulatory precision are all critical. ... What does drive progress are purpose-built platforms that understand the shape and structure of industry data from day one, moving, enriching, validating, and reformatting it to support the firm’s logic. Reinventing the wheel for every process isn’t necessary, but firms do need to acknowledge that, in financial services, data transformation isn’t some random back-office task. It’s a precondition for the type of smooth and reliable automation that prepares firms for the stark demands of a digital future.


Switching on resilience in a post-PSTN world

The copper PSTN network, first introduced in the Victorian era, was never built for the realities of today’s digital world. The PSTN was installed in the early 80s, and early broadband was introduced using the same lines in the early 90s. And the truth is, it needs to retire, having operated past its maintainable life span. Modern work depends on real-time connectivity and data-heavy applications, with expectations around speed, scalability, and reliability that outpace the capabilities of legacy infrastructure. ... Whether it’s a GP retrieving patient records or an energy network adjusting supply in real time, their operations depend on uninterrupted, high-integrity access to cloud systems and data center infrastructure. That’s why the PSTN switch-off must be seen not as a Telecoms milestone, but as a strategic resilience imperative. Without universal access upgrades, even the most advanced data centers can’t fulfil their role. The priority now is to build a truly modern digital backbone. One that gives homes, businesses, and CNI facilities alike robust, high-speed connectivity into the cloud. This is about more than retiring copper. It’s about enabling a smarter, safer, more responsive nation. Organizations that move early won’t just minimize risk, they’ll unlock new levels of agility, performance, and digital assurance.


Neither driver, nor passenger — covenantal co-creator

The covenantal model rests on a deeper premise: that intelligence itself emerges not just from processing information, but from the dynamic interaction between different perspectives. Just as human understanding often crystallizes through dialogue with others, AI-human collaboration can generate insights that exceed what either mind achieves in isolation. This isn't romantic speculation. It's observable in practice. When human contextual wisdom meets AI pattern recognition in genuine dialogue, new possibilities emerge. When human ethical intuition encounters AI systematic analysis, both are refined. When human creativity engages with AI synthesis, the result often transcends what either could produce alone. ... Critics will rightfully ask: How do we distinguish genuine partnership from sophisticated manipulation? How do we avoid anthropomorphizing systems that may simulate understanding without truly possessing it? ... The real danger isn't just AI dependency or human obsolescence. It's relational fragmentation — isolated humans and isolated AI systems operating in separate silos, missing the generative potential of genuine collaboration. What we need isn't just better drivers or more conscious passengers. We need covenantal spaces where human and artificial minds can meet as genuine partners in the work of understanding.


Facial recognition moves into vehicle lanes at US borders

According to the PTA, VBCE relies on a vendor capture system embedded in designated lanes at land ports of entry. As vehicles approach the primary inspection lane, high-resolution cameras capture facial images of occupants through windshields and side windows. The images are then sent to the VBCE platform where they are processed by a “vendor payload service” that prepares the files for CBP’s backend systems. Each image is stored temporarily in Amazon Web Services’ S3 cloud storage, accompanied by metadata and quality scores. An image-quality service assesses whether the photo is usable while an “occupant count” algorithm tallies the number of people in the vehicle to measure capture rates. A matching service then calls CBP’s Traveler Verification Service (TVS) – the central biometric database that underpins Simplified Arrival – to retrieve “gallery” images from government holdings such as passports, visas, and other travel documents. The PTA specifies that an “image purge service” will delete U.S. citizen photos once capture and quality metrics are obtained, and that all images will be purged when the evaluation ends. Still, during the test phase, images can be retained for up to six months, a far longer window than the 12-hour retention policy CBP applies in operational use for U.S. citizens.


Quantum Computing Meets Finance

Many financial-asset-pricing problems boil down to solving integral or partial differential equations. Quantum linear algebra can potentially speed that up. But the solution is a quantum state. So, you need to be creative about capturing salient properties of the numerical solution to your asset-pricing model. Additionally, pricing models are subject to ambiguity regarding sources of risk—factors that can adversely affect an asset’s value. Quantum information theory provides tools for embedding notions of ambiguity. ... Recall that some of the pioneering research on quantum algorithms was done in the 1990s by scientists like Deutsch, Shor, and Vazirani, among others. Today it’s still a challenge to implement their ideas with current hardware, and that’s three decades later. But besides hardware, we need progress on algorithms—there’s been a bit of a quantum algorithm winter. ... Optimization tasks across industries, including computational chemistry, materials science, and artificial intelligence, are also applied in the financial sector. These optimization algorithms are making progress. In particular, the ones related to quantum annealing are the most reliable scaled hardware out there. ... The most well-known case is portfolio allocation. You have to translate that into what’s known as quadratic unconstrained binary optimization, which means making compromises to maintain what you can actually compute. 


Beyond IT: How Today’s CIOs Are Shaping Innovation, Strategy and Security

It’s no longer acceptable to measure success by uptime or ticket resolution. Your worth is increasingly measured by your ability to partner with business units, translate their needs into scalable technology solutions and get those solutions to market quickly. That means understanding not just the tech, but the business models, revenue drivers and customer expectations. You don’t need to be an expert in marketing or operations, but you need to know how your decisions in architecture, tooling, and staffing directly impact their outcomes. ... Security and risk management are no longer checkboxes handled by a separate compliance team. They must be embedded into the DNA of your tech strategy. Becky refers to this as “table stakes,” and she’s right. If you’re not building with security from the outset, you’re building on sand. That starts with your provisioning model. We’re in a world where misconfigurations can take down global systems. Automated provisioning, integrated compliance checks and audit-ready architectures are essential. Not optional. ... CIOs need to resist the temptation to chase hype. Your core job is not to implement the latest tools. Your job is to drive business value and reduce complexity so your teams can move fast, and your systems remain stable. The right strategy? Focus on the essentials: Automated provisioning, integrated security and clear cloud cost governance. 


The Difference Between Entrepreneurs Who Survive Crises and Those Who Don't

Among the most underrated strategies for protecting reputation, silence holds a special place. It is not passivity; it's an intentional, active choice. Deciding not to react immediately to a provocation buys time to think, assess and respond surgically. Silence has a precise psychological effect: It frustrates your attacker, often pushing them to overplay their hand and make mistakes. This dynamic is well known in negotiation — those who can tolerate pauses and gaps often control the rhythm and content of the exchange. ... Anticipating negative scenarios is not pessimism — it's preparation. It means knowing ahead of time which actions to avoid and which to take to safeguard credibility. As Eccles, Newquist, and Schatz note in Harvard Business Review, a strong, positive reputation doesn't just attract top talent and foster customer loyalty — it directly drives higher pricing power, market valuation and investor confidence, making it one of the most valuable yet vulnerable assets in a company's portfolio. ... Too much exposure without a solid reputation makes an entrepreneur vulnerable and easily manipulated. Conversely, those with strong credibility maintain control even when media attention fades. In the natural cycle of public careers, popularity always diminishes over time. What remains — and continues to generate opportunities — is reputation. 


Ship Faster With 7 Oddly Specific devops Habits

PowerPoint can lie; your repo can’t. If “it works on my machine” is still a common refrain, we’ve left too much to human memory. We make “done” executable. Concretely, we put a Makefile (or a tiny task runner) in every repo so anyone—developer, SRE, or manager who knows just enough to be dangerous—can run the same steps locally and in CI. The pattern is simple: a single entry point to lint, test, build, and package. That becomes the contract for the pipeline. ... Pipelines shouldn’t feel like bespoke furniture. We keep a single “paved path” workflow that most repos can adopt unchanged. The trick is to keep it boring, fast, and self-explanatory. Boring means a sane default: lint, test, build, and publish on main; test on pull requests; cache aggressively; and fail clearly. Fast means smart caching and parallel jobs. Self-explanatory means the pipeline tells you what to do next, not just that you did it wrong. When a team deviates, they do it consciously and document why. Most of the time, they come back to the path once they see the maintenance cost of custom tweaks. ... A release isn’t done until we can see it breathing. We bake observability in before the first customer ever sees the service. That means three things: usable logs, metrics with labels that match our domain (not just infrastructure), and distributed traces. On top of those, we define one or two Service Level Objectives with clear SLIs—usually success rate and latency. 


Kali Linux vs Parrot OS – Which Penetration Testing Platform is Most Suitable for Cybersecurity Professionals?

Kali Linux ships with over 600 pre-installed penetration testing tools, carefully curated to cover the complete spectrum of security assessment activities. The toolset spans multiple categories, including network scanning, vulnerability analysis, exploitation frameworks, digital forensics, and post-exploitation utilities. Notable tools include the Metasploit Framework for exploitation testing, Burp Suite for web application security assessment, Nmap for network discovery, and Wireshark for protocol analysis. The distribution’s strength lies in its comprehensive coverage of penetration testing methodologies, with tools organized into logical categories that align with industry-standard testing procedures. The inclusion of cutting-edge tools such as Sqlmc for SQL injection testing, Sprayhound for password spraying integrated with Bloodhound, and Obsidian for documentation purposes demonstrates Kali’s commitment to addressing evolving security challenges. ... Parrot OS distinguishes itself through its holistic approach to cybersecurity, offering not only penetration testing tools but also integrated privacy and anonymity features. The distribution includes over 600 tools covering penetration testing, digital forensics, cryptography, and privacy protection. Key privacy tools include Tor Browser, AnonSurf for traffic anonymization, and Zulu Crypt for encryption operations.


How Artificial Intelligence Is Reshaping Cybersecurity Careers

AI-Enhanced SOC Analysts upends traditional security operations, where analysts leverage artificial intelligence to enhance their threat detection and incident response capabilities. These positions work with the existing analyst platforms that are capable of autonomous reasoning that mimics expert analyst workflows, correlating evidence, reconstructing timelines, and prioritizing real threats at a much faster rate. ... AI Risk Analysts and Governance Specialists ensure responsible AI deployment through risk assessments and adherence to compliance frameworks. Professionals in this role may hold a certification like the AIGP. This certification demonstrates that the holder can ensure safety and trust in the development and deployment of ethical AI and ongoing management of AI systems. This role requires foundational knowledge of AI systems and their use cases, the impacts of AI, and comprehension of responsible AI principles. ... AI Forensics Specialists represent an emerging role that combines traditional digital forensics with AI-specific environments and technology. This role is designed to analyze model behavior, trace adversarial attacks, and provide expert testimony in legal proceedings involving AI systems. While classic digital forensics focuses on post-incident investigations, preserving evidence and chain of custody, and reconstructing timelines, AI forensics specialists must additionally possess knowledge of machine learning algorithms and frameworks.

Daily Tech Digest - August 31, 2025


Quote for the day:

“Our chief want is someone who will inspire us to be what we know we could be.” -- Ralph Waldo Emerson



A Brief History of GPT Through Papers

The first neural network based language translation models operated in three steps (at a high level). An encoder would embed the “source statement” into a vector space, resulting in a “source vector”. Then, the source vector would be mapped to a “target vector” through a neural network and finally a decoder would map the resulting vector to the “target statement”. People quickly realized that the vector that was supposed to encode the source statement had too much responsibility. The source statement could be arbitrarily long. So, instead of a single vector for the entire statement, let’s convert each word into a vector and then have an intermediate element that would pick out the specific words that the decoder should focus more on. ... The mechanism by which the words were converted to vectors was based on recurrent neural networks (RNNs). Details of this can be obtained from the paper itself. These recurrent neural networks relied on hidden states to encode the past information of the sequence. While it’s convenient to have all that information encoded into a single vector, it’s not good for parallelizability since that vector becomes a bottleneck and must be computed before the rest of the sentence can be processed. ... The idea is to give the model demonstrative examples at inference time as opposed to using them to train its parameters. If no such examples are provided in-context, it is called “zero shot”. If one example is provided, “one shot” and if a few are provided, “few shot”.


8 Powerful Lessons from Robert Herjavec at Entrepreneur Level Up That Every Founder Needs to Hear

Entrepreneurs who remain curious — asking questions and seeking insights — often discover pathways others overlook. Instead of dismissing a "no" or a difficult response, Herjavec urged attendees to look for the opportunity behind it. Sometimes, the follow-up question or the willingness to listen more deeply is what transforms rejection into possibility. ... while breakthrough innovations capture headlines, the majority of sustainable businesses are built on incremental improvements, better execution and adapting existing ideas to new markets. For entrepreneurs, this means it's okay if your business doesn't feel revolutionary from day one. What matters is staying committed to evolving, improving and listening to the market. ... setbacks are inevitable in entrepreneurship. The real test isn't whether you'll face challenges, but how you respond to them. Entrepreneurs who can adapt — whether by shifting strategy, reinventing a product or rethinking how they serve customers — are the ones who endure. ... when leaders lose focus, passion or clarity, the organization inevitably follows. A founder's vision and energy cascade down into the culture, decision-making and execution. If leaders drift, so does the company. For entrepreneurs, this is a call to self-reflection. Protect your clarity of purpose. Revisit why you started. And remember that your team looks to you not just for direction, but for inspiration. 


The era of cheap AI coding assistants may be over

Developers have taken to social media platforms and GitHub to express their dissatisfaction over the pricing changes, especially across tools like Claude Code, Kiro, and Cursor, but vendors have not adjusted pricing or made any changes that significantly reduce credits consumption. Analysts don’t see any alternative to reducing the pricing of these tools. "There’s really no alternative until someone figures out the following: how to use cheaper but dumber models than Claude Sonnet 4 to achieve the same user experience and innovate on KVCache hit rate to reduce the effective price per dollar,” said Wei Zhou, head of AI utility research at SemiAnalysis. Considering the market conditions, CIOs and their enterprises need to start absorbing the cost and treat vibe coding tools as a productivity expense, according to Futurum’s Hinchcliffe. “CIOs should start allocating more budgets for vibe coding tools, just as they would do for SaaS, cloud storage, collaboration tools or any other line items,” Hinchcliffe said. “The case of ROI on these tools is still strong: faster shipping, fewer errors, and higher developer throughput. Additionally, a good developer costs six figures annually, while vibe coding tools are still priced in the low-to-mid thousands per seat,” Hinchcliffe added. ... “Configuring assistants to intervene only where value is highest and choosing smaller, faster models for common tasks and saving large-model calls for edge cases could bring down expenditure,” Hinchcliffe added.


AI agents need intent-based blockchain infrastructure

By integrating agents with intent-centric systems, however, we can ensure users fully control their data and assets. Intents are a type of building block for decentralized applications that give users complete control over the outcome of their transactions. Powered by a decentralized network of solvers, agentic nodes that compete to solve user transactions, these systems eliminate the complexity of the blockchain experience while maintaining user sovereignty and privacy throughout the process. ... Combining AI agents and intents will redefine the Web3 experience while keeping the space true to its core values. Intents bridge users and agents, ensuring the UX benefits users expect from AI while maintaining decentralization, sovereignty and verifiability. Intent-based systems will play a crucial role in the next phase of Web3’s evolution by ensuring agents act in users’ best interests. As AI adoption grows, so does the risk of replicating the problems of Web2 within Web3. Intent-centric infrastructure is the key to addressing both the challenges and opportunities that AI agents bring and is necessary to unlock their full potential. Intents will be an essential infrastructure component and a fundamental requirement for anyone integrating or considering integrating AI into DeFi. Intents are not merely a type of UX upgrade or optional enhancement. 


The future of software development: To what can AI replace human developers?

Rather than replacing developers, AI is transforming them into higher-level orchestrators of technology. The emerging model is one of human-AI collaboration, where machines handle the repetitive scaffolding and humans focus on design, strategy, and oversight. In this new world, developers must learn not just to write code, but to guide, prompt, and supervise AI systems. The skillset is expanding from syntax and logic to include abstraction, ethical reasoning, systems thinking, and interdisciplinary collaboration. In other words, AI is not making developers obsolete. It is making new demands on their expertise. ... This shift has significant implications for how we educate the next generation of software professionals. Beyond coding languages, students will need to understand how to evaluate AI- AI-generated output, how to embed ethical standards into automated systems, and how to lead hybrid teams made up of both humans and machines. It also affects how organisations hire and manage talent. Companies must rethink job descriptions, career paths, and performance metrics to account for the impact of AI-enabled development. Leaders must focus on AI literacy, not just technical competence. Professionals seeking to stay ahead of the curve can explore free programs, such as The Future of Software Engineering Led by Emerging Technologies, which introduces the evolving role of AI in modern software development.


Open Data Fabric: Rethinking Data Architecture for AI at Scale

The first principle, unified data access, ensures that agents have federated real-time access across all enterprise data sources without requiring pipelines, data movement, or duplication. Unlike human users who typically work within specific business domains, agents often need to correlate information across the entire enterprise to generate accurate insights. ... The second principle, unified contextual intelligence, involves providing agents with the business and technical understanding to interpret data correctly. This goes far beyond traditional metadata management to include business definitions, domain knowledge, usage patterns, and quality indicators from across the enterprise ecosystem. Effective contextual intelligence aggregates information from metadata, data catalogs, business glossaries, business intelligence tools, and tribal knowledge into a unified layer that agents can access in real-time.  ... Perhaps the most significant principle involves establishing collaborative self-service. This is a significant shift as it means moving from static dashboards and reports to dynamic, collaborative data products and insights that agents can generate and share with each other. The results are trusted “data answers,” or conversational, on-demand data products for the age of AI that include not just query results but also the business context, methodology, lineage, and reasoning that went into generating them.


A Simple Shift in Light Control Could Revolutionize Quantum Computing

A research collaboration led by Vikas Remesh of the Photonics Group at the Department of Experimental Physics, University of Innsbruck, together with partners from the University of Cambridge, Johannes Kepler University Linz, and other institutions, has now demonstrated a way to bypass these challenges. Their method relies on a fully optical process known as stimulated two-photon excitation. This technique allows quantum dots to emit streams of photons in distinct polarization states without the need for electronic switching hardware. In tests, the researchers successfully produced high-quality two-photon states while maintaining excellent single-photon characteristics. ... “The method works by first exciting the quantum dot with precisely timed laser pulses to create a biexciton state, followed by polarization-controlled stimulation pulses that deterministically trigger photon emission in the desired polarization,” explain Yusuf Karli and Iker Avila Arenas, the study’s first authors. ... “What makes this approach particularly elegant is that we have moved the complexity from expensive, loss-inducing electronic components after the single photon emission to the optical excitation stage, and it is a significant step forward in making quantum dot sources more practical for real-world applications,” notes Vikas Remesh, the study’s lead researcher.


AI and the New Rules of Observability

The gap between "monitoring" and true observability is both cultural and technological. Enterprises haven't matured beyond monitoring because old tools weren't built for modern systems, and organizational cultures have been slow to evolve toward proactive, shared ownership of reliability. ... One blind spot is model drift, which occurs when data shifts, rendering its assumptions invalid. In 2016, Microsoft's Tay chatbot was a notable failure due to its exposure to shifting user data distributions. Infrastructure monitoring showed uptime was fine; only semantic observability of outputs would have flagged the model's drift into toxic behavior. Hidden technical debt or unseen complexity in code can undermine observability. In machine learning, or ML, systems, pipelines often fail silently, while retraining processes, feature pipelines and feedback loops create fragile dependencies that traditional monitoring tools may overlook. Another issue is "opacity of predictions." ... AI models often learn from human-curated priorities. If ops teams historically emphasized CPU or network metrics, the AI may overweigh those signals while downplaying emerging, equally critical patterns - for example, memory leaks or service-to-service latency. This can occur as bias amplification, where the model becomes biased toward "legacy priorities" and blind to novel failure modes. Bias often mirrors reality.


Dynamic Integration for AI Agents – Part 1

An integration of components within AI differs from an integration between AI agents. The former relates to integration with known entities that form a deterministic model of information flow. The same relates to inter-application, inter-system and inter-service transactions required by a business process at large. It is based on mapping of business functionality and information (an architecture of the business in organisations) onto available IT systems, applications, and services. The latter shifts the integration paradigm since the very AI Agents decide that they need to integrate with something at runtime based on the overlapping of the statistical LLM and available information, which contains linguistic ties unknown even in the LLM training. That is, an AI Agent does not know what a counterpart — an application, another AI Agent or data source — it would need to cooperate with to solve the overall task given to it by its consumer/user. The AI Agent does not know even if the needed counterpart exists. ... Any AI Agent may have its individual owner and provider. These owners and providers may be unaware of each others and act independently when creating their AI Agents. No AI Agent can be self-sufficient due to its fundamental design — it depends on the prompts and real-world data at runtime. It seems that the approaches to integration and the integration solutions differ for the humanitarian and natural science spheres.


Counteracting Cyber Complacency: 6 Security Blind Spots for Credit Unions

Organizations that conduct only basic vendor vetting lack visibility into the cybersecurity practices of their vendors’ subcontractors. This creates gaps in oversight that attackers can exploit to gain access to an institution’s data. Third-party providers often have direct access to critical systems, making them an attractive target. When they’re compromised, the consequences quickly extend to the credit unions they serve. ... Cybercriminals continue to exploit employee behavior as a primary entry point into financial institutions. Social engineering tactics — such as phishing, vishing, and impersonation — bypass technical safeguards by manipulating people. These attacks rely on trust, familiarity, or urgency to provoke an action that grants the attacker access to credentials, systems, or internal data. ... Many credit unions deliver cybersecurity training on an annual schedule or only during onboarding. These programs often lack depth, fail to differentiate between job functions, and lose effectiveness over time. When training is overly broad or infrequent, staff and leadership alike may be unprepared to recognize or respond to threats. The risk is heightened when the threats are evolving faster than the curriculum. TruStage advises tailoring cyber education to the institution’s structure and risk profile. Frontline staff who manage member accounts face different risks than board members or vendors. 

Daily Tech Digest - August 22, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy


Leveraging DevOps to accelerate the delivery of intelligent and autonomous care solutions

Fast iteration and continuous delivery have become standard in industries like e-commerce and finance. Healthcare operates under different rules. Here, the consequences of technical missteps can directly affect care outcomes or compromise sensitive patient information. Even a small configuration error can delay a diagnosis or impact patient safety. That reality shifts how DevOps is applied. The focus is on building systems that behave consistently, meet compliance standards automatically, and support reliable care delivery at every step. ... In many healthcare environments, developers are held back by slow setup processes and multi-step approvals that make it harder to contribute code efficiently or with confidence. This often leads to slower cycles and fragmented focus. Modern DevOps platforms help by introducing prebuilt, compliant workflow templates, secure self-service provisioning for environments, and real-time, AI-supported code review tools. In one case, development teams streamlined dozens of custom scripts into a reusable pipeline that provisioned compliant environments automatically. The result was a noticeable reduction in setup time and greater consistency across projects. Building on this foundation, DevOps also play a vital role in development and deployment of the Machine Learning Models. 


Tackling the DevSecOps Gap in Software Understanding

The big idea in DevSecOps has always been this: shift security left, embed it early and often, and make it everyone’s responsibility. This makes DevSecOps the perfect context for addressing the software understanding gap. Why? Because the best time to capture visibility into your software’s inner workings isn’t after it’s shipped—it’s while it’s being built. ... Software Bill of Materials (SBOMs) are getting a lot of attention—and rightly so. They provide a machine-readable inventory of every component in a piece of software, down to the library level. SBOMs are a baseline requirement for software visibility, but they’re not the whole story. What we need is end-to-end traceability—from code to artifact to runtime. That includes:Component provenance: Where did this library come from, and who maintains it? Build pipelines: What tools and environments were used to compile the software? Deployment metadata: When and where was this version deployed, and under what conditions? ... Too often, the conversation around software security gets stuck on source code access. But as anyone in DevSecOps knows, access to source code alone doesn’t solve the visibility problem. You need insight into artifacts, pipelines, environment variables, configurations, and more. We’re talking about a whole-of-lifecycle approach—not a repo review.


Navigating the Legal Landscape of Generative AI: Risks for Tech Entrepreneurs

The legal framework governing generative AI is still evolving. As the technology continues to advance, the legal requirements will also change. Although the law is still playing catch-up with the technology, several jurisdictions have already implemented regulations specifically targeting AI, and others are considering similar laws. Businesses should stay informed about emerging regulations and adapt their practices accordingly. ... Several jurisdictions have already enacted laws that specifically govern the development and use of AI, and others are considering such legislation. These laws impose additional obligations on developers and users of generative AI, including with respect to permitted uses, transparency, impact assessments and prohibiting discrimination. ... In addition to AI-specific laws, traditional data privacy and security laws – including the EU General Data Protection Regulation (GDPR) and U.S. federal and state privacy laws – still govern the use of personal data in connection with generative AI. For example, under GDPR the use of personal data requires a lawful basis, such as consent or legitimate interest. In addition, many other data protection laws require companies to disclose how they use and disclose personal data, secure the data, conduct data protection impact assessments and facilitate individual rights, including the right to have certain data erased. 


Five ways OSINT helps financial institutions to fight money laundering

By drawing from public data sources available online, such as corporate registries and property ownership records, OSINT tools can provide investigators with a map of intricate corporate and criminal networks, helping them unmask UBOs. This means investigators can work more efficiently to uncover connections between people and companies that they otherwise might not have spotted. ... External intelligence can help analysts to monitor developments, so that newer forms of money laundering create fewer compliance headaches for firms. Some of the latest trends include money muling, where criminals harness channels like social media to recruit individuals to launder money through their bank accounts, and trade-based laundering, which allows bad actors to move funds across borders by exploiting international complexity. OSINT helps identify these emerging patterns, enabling earlier intervention and minimizing enforcement risks. ... When it comes to completing suspicious activity reports (SARs), many financial institutions rely on internal data, spending millions on transaction monitoring, for instance. While these investments are unquestionably necessary, external intelligence like OSINT is often neglected – despite it often being key to identifying bad actors and gaining a full picture of financial crime risk. 


The hard problem in data centres isn’t cooling or power – it’s people

Traditional infrastructure jobs no longer have the allure they once did, with Silicon Valley and startups capturing the imagination of young talent. Let’s be honest – it just isn’t seen as ‘sexy’ anymore. But while people dream about coding the next app, they forget someone has to build and maintain the physical networks that power everything. And that ‘someone’ is disappearing fast. Another factor is that the data centre sector hasn’t done a great job of telling its story. We’re seen as opaque, technical and behind closed doors. Most students don’t even know what a data centre is, and until something breaks  it doesn’t even register. That’s got to change. We need to reframe the narrative. Working in data centres isn’t about grey boxes and cabling. It’s about solving real-world problems that affect billions of people around the world, every single second of every day. ... Fixing the skills gap isn’t just about hiring more people. It’s about keeping the knowledge we already have in the industry and finding ways to pass it on. Right now, we’re on the verge of losing decades of expertise. Many of the engineers, designers and project leads who built today’s data centre infrastructure are approaching retirement. While projects operate at a huge scale and could appear exciting to new engineers, we also have inherent challenges that come with relatively new sectors. 


Multi-party computation is trending for digital ID privacy: Partisia explains why

The main idea is achieving fully decentralized data, even biometric information, giving individuals even more privacy. “We take their identity structure and we actually run the matching of the identity inside MPC,” he says. This means that neither Partisia nor the company that runs the structure has the full biometric information. They can match it without ever decrypting it, Bundgaard explains. Partisia says it’s getting close to this goal in its Japan experiment. The company has also been working on a similar goal of linking digital credentials to biometrics with U.S.-based Trust Stamp. But it is also developing other identity-related uses, such as proving age or other information. ... Multiparty computation protocols are closing that gap: Since all data is encrypted, no one learns anything they did not already know. Beyond protecting data, another advantage is that it still allows data analysts to run computations on encrypted data, according to Partisia. There may be another important role for this cryptographic technique when it comes to privacy. Blockchain and multiparty computation could potentially help lessen friction between European privacy standards, such as eIDAS and GDPR, and those of other countries. “I have one standard in Japan and I travel to Europe and there is a different standard,” says Bundgaard. 


MIT report misunderstood: Shadow AI economy booms while headlines cry failure

While headlines trumpet that “95% of generative AI pilots at companies are failing,” the report actually reveals something far more remarkable: the fastest and most successful enterprise technology adoption in corporate history is happening right under executives’ noses. ... The MIT researchers discovered what they call a “shadow AI economy” where workers use personal ChatGPT accounts, Claude subscriptions and other consumer tools to handle significant portions of their jobs. These employees aren’t just experimenting — they’re using AI “multiples times a day every day of their weekly workload,” the study found. ... Far from showing AI failure, the shadow economy reveals massive productivity gains that don’t appear in corporate metrics. Workers have solved integration challenges that stymie official initiatives, proving AI works when implemented correctly. “This shadow economy demonstrates that individuals can successfully cross the GenAI Divide when given access to flexible, responsive tools,” the report explains. Some companies have started paying attention: “Forward-thinking organizations are beginning to bridge this gap by learning from shadow usage and analyzing which personal tools deliver value before procuring enterprise alternatives.” The productivity gains are real and measurable, just hidden from traditional corporate accounting. 


The Price of Intelligence

Indirect prompt injection represents another significant vulnerability in LLMs. This phenomenon occurs when an LLM follows instructions embedded within the data rather than the user’s input. The implications of this vulnerability are far-reaching, potentially compromising data security, privacy, and the integrity of LLM-powered systems. At its core, indirect prompt injection exploits the LLM’s inability to consistently differentiate between content it should process passively (that is, data) and instructions it should follow. While LLMs have some inherent understanding of content boundaries based on their training, they are far from perfect. ... Jailbreaks represent another significant vulnerability in LLMs. This technique involves crafting user-controlled prompts that manipulate an LLM into violating its established guidelines, ethical constraints, or trained alignments. The implications of successful jailbreaks can potentially undermine the safety, reliability, and ethical use of AI systems. Intuitively, jailbreaks aim to narrow the gap between what the model is constrained to generate, because of factors such as alignment, and the full breadth of what it is technically able to produce. At their core, jailbreaks exploit the flexibility and contextual understanding capabilities of LLMs. While these models are typically designed with safeguards and ethical guidelines, their ability to adapt to various contexts and instructions can be turned against them.


The Strategic Transformation: When Bottom-Up Meets Top-Down Innovation

The most innovative organizations aren’t always purely top-down or bottom-up—they carefully orchestrate combinations of both. Strategic leadership provides direction and resources, while grassroots innovation offers practical insights and the capability to adapt rapidly. Chynoweth noted how strategic portfolio management helps companies “keep their investments in tech aligned to make sure they’re making the right investments.” The key is creating systems that can channel bottom-up innovations while ensuring they support the organization’s strategic objectives. Organizations that succeed in managing both top-down and bottom-up innovation typically have several characteristics. They establish clear strategic priorities from leadership while creating space for experimentation and adaptation. They implement systems for capturing and evaluating innovations regardless of their origin. And they create mechanisms for scaling successful pilots while maintaining strategic alignment. The future belongs to enterprises that can master this balance. Pure top-down enterprises will likely continue to struggle with implementation realities and changing market conditions. In contrast, pure bottom-up organizations would continue to lack the scale and coordination needed for significant impact.


Digital-first doesn’t mean disconnected for this CEO and founder

“Digital-first doesn’t mean disconnected – it means being intentional,” she said. For leaders it creates a culture where the people involved feel supported, wherever they’re working, she thinks. She adds that while many organisations found themselves in a situation where the pandemic forced them to establish a remote-first system, very few actually fully invested in making it work well. “High performance and innovation don’t happen in isolation,” said Feeney. “They happen when people feel connected, supported and inspired.” Sentiments which she explained are no longer nice to have, but are becoming a part of modern organisational infrastructure. One in which people are empowered to do their best work on their own terms. ... “One of the biggest challenges I have faced as a founder was learning to slow down, especially when eager to introduce innovation. Early on, I was keen to implement automation and technology, but I quickly realised that without reliable data and processes, these tools could not reach their full potential.” What she learned was, to do things correctly, you have to stop, review your foundations and processes and when you encounter an obstacle, deal with it, because though the stopping and starting might initially be frustrating, you can’t overestimate the importance of clean data, the right systems and personnel alignment with new tech.

Daily Tech Digest - June 05, 2025


Quote for the day:

"The greatest accomplishment is not in never falling, but in rising again after you fall." -- Vince Lombardi


Your Recovery Timeline Is a Lie: Why They Fall Apart

Teams assume they can pull snapshots from S3 or recover databases from a backup tool. What they don’t account for is the reconfiguration time required to stitch everything back together. ... RTOs need to be redefined through the lens of operational reality and validated through regular, full-system DR rehearsals. This is where IaC and automation come in. By codifying all layers of your infrastructure — not just compute and storage, but IAM, networking, observability and external dependencies, too — you gain the ability to version, test and rehearse your recovery plans. Tools like Terraform, Helm, OpenTofu and Crossplane allow you to build immutable blueprints of your infrastructure, which can be automatically redeployed in disaster scenarios. But codification alone isn’t enough. Continuous testing is critical. Just as CI/CD pipelines validate application changes, DR validation pipelines should simulate failover scenarios, verify dependency restoration and track real mean time to recovery (MTTR) metrics over time. ... It’s also time to stop relying on aspirational RTOs and instead measure actual MTTR. It’s what matters when things go wrong, indicating how long it really takes to go from incident to resolution. Unlike RTOs, which are often set arbitrarily, MTTR is a tangible, trackable indicator of resilience.


The Dawn of Unified DataOps—From Fragmentation to Transformation

Data management has traditionally been the responsibility of IT, creating a disconnect between this function and the business departments that own and understand the data’s value. This separation has resulted in limited access to unified data across the organization, including the tools and processes to leverage it outside of IT. ... Organizations looking to embrace DataOps and transform their approach to data must start by creating agile DataOps teams that leverage software-oriented methodologies; investing in data management solutions that leverage DataOps and data mesh concepts; investing in scalable automation and integration; and cultivating a data-driven culture. Much like agile software teams, it’s critical to include product management, domain experts, test engineers, and data engineers. Approach delivery iteratively, incrementally delivering MVPs, testing, and improving capabilities and quality. ... Technology alone won’t solve data challenges. Truly transformative DataOps strategies align with unified teams that pair business users and subject matter experts with DataOps professionals, forming a culture where collaboration, accessibility, and transparency are at the core of decision making.


Redefining Cyber Value: Why Business Impact Should Lead the Security Conversation

A BVA brings clarity to that timeline. It identifies the exposures most likely to prolong an incident and estimates the cost of that delay based on both your industry and organizational profile. It also helps evaluate the return of preemptive controls. For example, IBM found that companies that deploy effective automation and AI-based remediation see breach costs drop by as much as $2.2 million. Some organizations hesitate to act when the value isn't clearly defined. That delay has a cost. A BVA should include a "cost of doing nothing" model that estimates the monthly loss a company takes on by leaving exposures unaddressed. We've found that for a large enterprise, that cost can exceed half a million dollars. ... There's no question about how well security teams are doing the work. The issue is that traditional metrics don't always show what their work means. Patch counts and tool coverage aren't what boards care about. They want to know what's actually being protected. A BVA helps connect the dots – showing how day-to-day security efforts help the business avoid losses, save time, and stay more resilient. It also makes hard conversations easier. Whether it's justifying a budget, walking the board through risk, or answering questions from insurers, a BVA gives security leaders something solid to point to. 


Fake REAL Ids Have Already Arrived, Here’s How to Protect Your Business

When the REAL ID Act of 2005 was introduced, it promised to strengthen national security by setting higher standards for state-issued IDs, especially when it came to air travel, access to federal buildings, and more. Since then, the roll-out of the REAL ID program has faced delays, but with an impending enforcement deadline, many are questioning if REAL IDs deliver the level of security intended. ... While the original aim was to prevent another 9/11-style attack, over 20 years later, the focus has shifted to protecting against identity theft and illegal immigration. The final deadline to get your REAL ID is now May 7th, 2025, owing in part to differing opinions and adoption rates state-by-state which has dragged enforcement on for two decades.  ... The delays and staggered adoption has given bad actors the chance to create templates for fraudulent REAL IDs. Businesses may incorrectly assume that an ID bearing a REAL ID star symbol are more likely to be legitimate, but as our data proves, this is not the case. REAL IDs can be faked just as easily as any other identity document, putting the onus on businesses to implement robust ID verification methods to ensure they don’t fall victim to ID fraud. ... AI-powered identity verification is one of the only ways to combat the increasing use of AI-powered criminal tools. 


How this 'FinOps for AI' certification can help you tackle surging AI costs

To really adopt AI into your enterprise, we're talking about costs that are orders of magnitude greater. Companies are turning to FinOps for help dealing with this. FinOps, a portmanteau of Finance and DevOps, combines financial management and collaborative, agile IT operations into a discipline to manage costs. It started as a way to get a handle on cloud pricing. FinOps' first job is to optimize cloud spending and align cloud costs with business objectives. ... Today, they're adding AI spending to their concerns. According to the FinOps Foundation, 63% of FinOps practitioners are already being asked to manage AI costs, a number expected to rise as AI innovation continues to surge. Mismanagement of these costs can not only erode business value but also stifle innovation. "FinOps teams are being asked to manage accelerating AI spend to allocate its cost, forecast its growth, and ultimately show its value back to the business," said Storment. "But the speed and complexity of the data make this a moving target, and cost overruns in AI can slow innovation when not well managed." Besides, Storment added, C-level executives are asking that painful question: "You're using this AI service and spending too much. Do you know what it's for?" 


Tackling Business Loneliness

Leaders who intentionally reach out to their employees do more than combat loneliness; they directly influence performance and business success. "To lead effectively, you need to lead with care. Because care creates connection. Connection fuels commitment. And commitment drives results. It's in those moments of real connection that collective brilliance is unlocked," she concludes. ... But it's not just women, with many men facing isolation in the workplace too, especially where a culture of 'put up and shut up' is frequently seen. Reflected in the high prevalence of suicide in the UK construction industry, it is essential that toxic cultures are dismantled and all employees feel valued and part of the team. "Whether they work on site or remotely, full time or part time, building an inclusive culture helps to ensure people do not experience prolonged loneliness or lack of connection. When we prioritise inclusion, everyone benefits," Allen concludes. ... Providing a safe, non-judgemental space for employees to discuss loneliness, things that are troubling them, and ways to manage any negative feelings is crucial. "This could be with a trusted line manager or colleague, but objective support from professional therapists and counsellors should also be accessible to prevent loneliness from manifesting into more serious issues," she emphasises. 


Revolutionizing Software Development: Agile, Shift-Left, and Cybersecurity Integration

While shift-left may cost more resources in the short term, in most cases, the long-term savings more than make up for the initial investment. Bugs discovered after a product release can cost up to 640 times more than those caught during development. In addition, late detection can increase the risk of fines from security breaches, as well as causing damage to a brand’s trust. Automation tools are the primary answer to these concerns and are at the core of what makes shift-left possible. The popular tech industry mantra, “automate everything,” continues to apply. Static analysis, dynamic analysis, and software composition analysis tools scan for known vulnerabilities and common bugs, producing instant feedback as code is first merged into development branches. ... Shift-left balances speed with quality. Performing regular checks on code as it is written reduces the likelihood that significant defects and vulnerabilities will surface after a release. Once software is out in the wild, the cost to fix issues is much higher and requires extensively more work than catching them in the early phases. Despite the advantages of shift-left, navigating the required cultural change can be a challenge. As such, it’s crucial for developers to be set up for success with effective tools and proper guidance.


Feeling Reassured by Your Cybersecurity Measures?

Organizations must pursue a data-driven approach that embraces comprehensive NHI management. This approach, combined with robust Secrets Security Management, can ensure that none of your non-human identities become security weak points. Remember, feeling reassured about your cybersecurity measures is not just about having security systems in place, but also about knowing how to manage them effectively. Effective NHI management will be a cornerstone in instilling peace of mind and enhancing security confidence. With these insights into the strategic importance of NHI management in promoting cybersecurity confidence, organizations can take a step closer to feeling reassured by their cybersecurity measures. ... Imagine a simple key, one that turns tumblers in the lock mechanism but isn’t alone in doing so. There are other keys that fit the same lock, and they all have the power to unlock the same door. This is similar to an NHI and its associated secret. There are numerous NHIs that could access the same system or part of a system, granted via their unique ‘Secret’. Now, here’s where it gets a little complex. ... Just as a busy airport needs security checkpoints to screen passengers and verify their credentials, a robust NHI management system is needed to accurately identify and manage all NHIs. 


How to Capitalize on Software Defined Storage, Securely and Compliantly

Because it fundamentally transforms data infrastructure, SDS is critical for technology executives to understand and capitalize on. It not only provides substantial cost savings and predictability and while reducing staff time required for managing physical hardware; SDS also makes companies much more agile and flexible in their business operations. For example, launching new initiatives or products that can start small and quickly scale is much easier with SDS. As a result, SDS does not just impact IT, it is a critical function across the enterprise. Software-defined storage in the cloud has brought major operational and cost benefits for enterprises. First, subscription business models enable buyers to make much more cost-conscious decisions and avoid wasting resources and usage. ... In addition, software-defined storage has also transformed technology management frameworks. SDS has enabled a move to agile DevOps, which includes real-time analytics resulting in faster iteration, less downtime and more efficient resource allocation. With real-time dashboards and alerts, organizations can now track key KPIs such as uptime and performance and react instantly. IT management can be more proactive by increasing storage or resource capacity when needed, rather than waiting for a crash to react.


The habits that set future-ready IT leaders apart

Constructive discomfort is the impetus to continuous learning, adaptability, agility, and anti-fragility. The concept of anti-fragile means designed for change. How do we build anti-fragile humans so they are unbreakable and prepared for tomorrow’s world, whatever it brings? We have these fault-tolerant designs where I can unplug a server and the system adapts and you don’t even know it. We want to create that same anti-fragility and fault tolerance in the human beings we train. We’re living in this ever-changing, accelerating VUCA [volatile, uncertain, complex, ambiguous] world, and there are two responses when you are presented with the unknown or the unexpected: You can freeze and be fearful and have it overcome you, or you can improvise, adapt, and overcome it by being a continuous learner and continuous adapter. I think resiliency in human beings is driven by this constructive discomfort, which creates a path to being continuous learners and continuous adapters. ... Strategic competence is knowing what hill to take, tactical competence is knowing how to take that hill safely, and technical competence is rolling up your sleeves and helping along the way. The leaders I admire have all three. The person who doesn’t have technical competence may set forth an objective and even chart the path to get there, but then they go have coffee. That leader is probably not going to do well. 

Daily Tech Digest - April 12, 2025


Quote for the day:

"Good management is the art of making problems so interesting and their solutions so constructive that everyone wants to get to work and deal with them." -- Paul Hawken


Financial Fraud, With a Third-Party Twist, Dominates Cyber Claims

Data on the most significant threats and what technologies and processes can have the greatest preventative impact on those threats are extremely valuable, says Andrew Braunberg, principal analyst at business intelligence firm Omdia. "It's great data for the enterprise, no question about it — that kind of data is going to be more and more useful for folks," he says. "As insurers figure out how to collect more standardized data, and more comprehensive data, at a quicker cadence — that's good news." ... While most companies do not consider their cyber-insurance provider as a security adviser, they do make decisions based on the premiums presented to them, says Omdia's Braunberg. And many companies seem ready to rely on insurers more. "Nobody really thought of these guys as security advisors that they should really be turning to, but if that shift happens, then I think the question gets a lot more interesting," he says. "Companies may have these annual sit-downs with their insurers where you really walk through this data and decide what kind of investments to make — and that's a different world than the way most security investment decisions are done today." The fact that cyber insurers are moving into an advisory role may be good news, considering the US government's pullback from aiding enterprises with cybersecurity, says At-Bay's Tyra. 


How to Handle a Talented, Yet Quirky, IT Team Member

Balance respect for individuality with the needs of the team and organization. By valuing their quirks as part of their creative process, you'll foster a sense of belonging and loyalty, Honnenahalli says. "Clear boundaries and open communication will prevent potential misunderstandings, ensuring harmony within the team." ... Leaders should aim to channel quirkiness constructively rather than working to eliminate it. For instance, if a quirky habit is distracting or counterproductive, the team leader can guide the individual toward alternatives that achieve similar results without causing friction, Honnenahalli says. Avoid suppressing individuality unless it directly conflicts with professional responsibilities or team cohesion. Help the unconventional team member channel their quirks productively rather than trying to reduce them, Xu suggests. "This means offering support and guidance in ways that allow them to thrive within the structure of the team." Remember that quirks can often be a unique asset in problem-solving and innovation. ... In IT, where innovation thrives on diverse perspectives, quirky team members often deliver creative solutions and unconventional thinking, Honnenahalli says. "Leaders who manage such individuals effectively can cultivate a culture of innovation and inclusivity, boosting morale and productivity."


A Guide to Managing Machine Identities

Limited visibility into highly fragmented machine identities makes them difficult to manage and secure. According to CyberArk's 2024 Identity Security Threat Landscape Report - a global survey of 2,400 security decision-makers across 18 countries - 93% of organizations experienced two or more identity-related breaches in 2023. Machine identities are a frequent target, with previous CyberArk research indicating that two-thirds of organizations have access to sensitive data. A ransomware attack on a popular file transfer system last year exposed the sensitive information of approximately 60 million individuals and impacted more than 2,000 public and private sector organizations. ... To address the challenges associated with managing fragmented machine identities, CyberArk Secrets Hub and CyberArk Cloud Visibility can help standardize and automate operational processes. These tools provide better visibility into identities that require access and determine whether the request is legitimate. ... Organizations should identify and secure their machine identities across multiple on-premises and cloud environments, including those from different cloud service providers. The right governance tool can help organizations meet the unique needs of each platform, while also making it easier to maintain a unified approach to machine identity management.


7 strategic insights business and IT leaders need for AI transformation in 2025

AI innovation continues rapidly, but enterprises must distinguish between practical AI that delivers tangible ROI and aspirational solutions that lack immediate business value. Practical AI enhances agent productivity, reduces handle times, and personalizes customer interactions in ways that directly impact revenue and operational efficiency. Business leaders must challenge vendors to demonstrate clear business cases, ensuring AI investments align with specific organizational objectives rather than speculative, unproven technology. Also, every AI initiative must have a roadmap with clearly defined focus areas and milestones. ... Enterprises now generate vast amounts of interaction data, but the true competitive advantage sits with AI-powered analytics. Real-time sentiment analysis, predictive modeling, and conversational intelligence redefine how organizations measure and optimize performance across customer-facing and internal communications. Companies that harness these insights can proactively address customer needs, optimize workforce performance, and drive data-driven decision-making -- at scale. ... Automation is no longer just a convenience but a necessity for streamlining complex business processes and enhancing customer journeys.


Bryson Bort on Cyber Entrepreneurship and the Needed Focus on Critical Infrastructure

Most people only know industrial control systems as “Stuxnet” and, even then, with a limited idea of what exactly that means. These are the computers that run critical infrastructure, manufacturing plants, and dialysis machines in hospitals. A bad day with normal computers means ransomware where a business can’t run, espionage where a company loses valuable data, or a regular person getting scammed out of their bank account. All pretty bad, but at least everyone is still breathing. With ICS, a bad day can mean loss of life or limb and that’s just at the point of use. The downstream effects of water or electricity being disrupted sends us to the Stone Ages immediately and there is a direct correlation to loss of life in those scenarios. ... As an entrepreneur, it’s the same and the Law of N is the variable number of people that you can lead where you personally have a visible impact on their daily requirements. The second you hit N+1, it is another leader below you in the chain who now has that impact. In summary: 1) you can’t do it alone, being an individual contributor (no matter how talented) is never going to be as impactful as a squad/team; 2) the structure you build is going to dictate the success or failure of the execution of your ideas; and 3) you have leadership limits of what you can control.


Rethinking talent strategy: What happens when you merge performance with development

Often, performance and development live on different systems, with no unified view of progress, potential, or skill gaps. Without a continuous data loop, talent teams struggle to design meaningful interventions, and line managers lack the insight to support growth conversations effectively. The result? Employee development efforts become reactive, generic, and in many cases, ineffective. But the problem isn’t just technical. According to Mohit Sharma, CHRO at EKA Mobility, there’s a strategic imbalance in focus. “Performance management often prioritises business metrics—financials, customer outcomes, process efficiency—while people-related goals receive less attention,” he says. “This naturally sidelines employee development.” And when development is treated as an afterthought, Individual Development Plans (IDPs) become little more than checkboxes. “The IDP often runs as a standalone activity, disconnected from performance outcomes,” Sharma adds. “This fragmentation means development doesn’t feed into performance—and vice versa.” Moreover, most organisations struggle with systematic skill-gap identification. In fast-changing industries, capability needs evolve every quarter. 


How cybercriminals are using AI to power up ransomware attacks

Ransomware gangs are increasingly deploying AI across every stage of their operations, from initial research to payload deployment and negotiations. Smaller outfits can punch well above their weight in terms of scale and sophistication, while more established groups are transforming into fully automated extortion machines. As new gangs emerge, evolve and adapt to boost their chances of success, here we explore the AI-driven tactics that are reshaping ransomware as we know it. Cybercriminal groups will typically pursue the path of least resistance to making a profit. As such, most cases of malign AI have been lower hanging fruit focusing on automating existing processes. That said, there is also a significant risk of more tech-savvy groups using AI to enhance the effectiveness of the malware itself. Perhaps the most dangerous example is polymorphic ransomware, which uses AI to mutate its code in real time. Each time the malware infects a new system, it rewrites itself, making detection far more difficult as it evades antivirus and endpoint security looking for specific signatures. Self-learning capabilities and independent adaptability are drastically increasing the chances of ransomware reaching critical systems and propagating before it can be detected and shut down.


IBM Quantum CTO Says Codes And Commitment Are Critical For Hitting Quantum Roadmap Goals

The technique — called the Gross code — shrinks the number of physical qubits required to produce stable output, significantly easing the engineering burden, according to R&D World. “The Gross code bought us two really big things,” Oliver Dial, IBM Quantum’s chief technology officer, said in an interview with R&D World. “One is a 10-fold reduction in the number of physical qubits needed per logical qubit compared to typical surface code estimates.” ... IBM’s optimism is grounded not just in long-term error correction, but in near-term tactics like error mitigation, a strategy to extract meaningful results from today’s imperfect machines. These techniques offer a way to recover accurate answers from computers that commit errors, Dial told R&D World. He sees this as a bridge between today’s noisy intermediate-scale quantum (NISQ) machines and tomorrow’s fully fault-tolerant quantum computers. Competitors are also racing to prove real-world use cases. Google has published recent results in quantum error correction, while Quantinuum and JPMorgan Chase are exploring secure applications like random number generation, R&D World points out. IBM’s bet is that better codes, especially its low-density parity check (LDPC) approach refined through the Gross code, will accelerate real deployments.


Defining leadership through mentorship and a strong network

While it’s a challenge to schedule a time each month that works for everyone, she says, there’s a lot of value in them to build strong team camaraderie. It’s also helped everyone better understand diverse backgrounds, what everyone’s contributing, and how the team can lean into those strengths and overcome challenges. ... While she wasn’t sure how it would land, it grabbed the attention of the CIO, who had never seen this approach before, and opened the dialogue for Schulze to be a candidate. She decided to push past any insecurities or fears, and go for a position she didn’t necessarily feel totally qualified for, but ended up landing the job. Schulze knows not everyone feels comfortable stepping out of their comfort zone, but as a leader, she wants to set that example for her employees. She identifies opportunities for growth and advancement, regardless of background or experience, and helps them tap into their potential. She understands it’s difficult for women to break through the boys club mentality that can exist in tech, and the challenge to fight stereotypes around women in IT and STEM careers. In her own career, Schulze had to apply herself extra hard to prove her worth and value, even when she had the same answers as her male counterparts.


Cracking the Code on Cybersecurity ROI

Quantifying the total cost of cybersecurity investments — which have long been at the top of most companies' IT spending priorities — is easy enough. It entails adding up the cost of the hardware resources, software tools, and personnel (including both internal employees as well as any outsourced cybersecurity services) that an organization deploys to mitigate security risks. But determining how much value those investments yield is where things get tricky. This is primarily because, again, the goal of cybersecurity investments is to prevent breaches from occurring — and when no breach occurs, there is no quantifiable cost to measure. ... Rather than estimating breach frequency and cost based on historical data specific to your business, you could look at data about current cybersecurity trends for other companies similar to yours, considering factors like their region, the type of industry they operate in, and their size. This data provides insight into how likely your type of business will experience a breach and what that breach will likely cost. ... A third approach is to measure cybersecurity ROI in terms of the value you don't create due to breaches that do occur. This is effectively an inverse form of cybersecurity ROI. ... Using this data, you can predict how much money you'd save through additional cybersecurity spending.