
Daily Tech Digest - August 11, 2025


Quote for the day:

"Leadership is absolutely about inspiring action, but it is also about guarding against mis-action." -- Simon Sinek


Attackers Target the Foundations of Crypto: Smart Contracts

Central to the attack is a malicious smart contract, written in the Solidity programming language, with obfuscated functionality that transfers stolen funds to a hidden externally owned account (EOA), says Alex Delamotte, the senior threat researcher with SentinelOne who wrote the analysis. ... The decentralized finance (DeFi) ecosystem relies on smart contracts — as well as other technologies such as blockchains, oracles, and key management — to execute transactions, manage data on a blockchain, and allow for agreements between different parties and intermediaries. Yet their linchpin status also makes smart contracts a focus of attacks and a key component of fraud. "A single vulnerability in a smart contract can result in the irreversible loss of funds or assets," Shashank says. "In the DeFi space, even minor mistakes can have catastrophic financial consequences. However, the danger doesn’t stop at monetary losses — reputational damage can be equally, if not more, damaging." ... Companies should take stock of all smart contracts by maintaining a detailed and up-to-date record of all deployed smart contracts, verifying every contract, and conducting periodic audits. Real-time monitoring of smart contracts and transactions can detect anomalies and provide fast response to any potential attack, says CredShields' Shashank.
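The inventory-and-verification step recommended above can be sketched in a few lines. This is a minimal illustration, not SentinelOne's or CredShields' tooling: the registry structure, the placeholder address, and the idea of hashing deployed bytecode against an audited baseline are all assumptions made for the example.

```python
import hashlib

# Hypothetical inventory of deployed contracts: address -> hash of the
# audited bytecode. In practice the live bytecode would be fetched from a
# node; here it is passed in directly to keep the sketch self-contained.
CONTRACT_REGISTRY = {
    "0xDEADBEEF": hashlib.sha256(b"audited bytecode v1").hexdigest(),
}

def verify_contract(address: str, onchain_bytecode: bytes) -> bool:
    """Return True only if the bytecode at `address` matches the audited version."""
    expected = CONTRACT_REGISTRY.get(address)
    if expected is None:
        return False  # unknown contract: flag for review
    return hashlib.sha256(onchain_bytecode).hexdigest() == expected
```

A periodic audit job could run this check over every address in the registry and alert on any mismatch or unregistered contract.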


Is AI the end of IT as we know it?

CIOs have always been challenged by the time, skills, and complexities involved in running IT operations. Cloud computing, low-code development platforms, and many DevOps practices helped IT teams move “up stack,” away from the ones and zeros, to higher-level tasks. Now the question is whether AI will free CIOs and IT to focus more on where AI can deliver business value, instead of developing and supporting the underlying technologies. ... Joe Puglisi, growth strategist and fractional CIO at 10xnewco, offered this pragmatic advice: “I think back to the days when you wrote in assembly and it took a lot of time. We introduced compilers, higher-level languages, and now we have AI that can write code. This is a natural progression of capabilities and not the end of programming.” The paradigm shift suggests CIOs will have to revisit their software development lifecycles for significant shifts in skills, practices, and tools. “AI won’t replace agile or DevOps — it’ll supercharge them with standups becoming data-driven, CI/CD pipelines self-optimizing, and QA leaning on AI for test creation and coverage,” says Dominik Angerer, CEO of Storyblok. “Developers shift from coding to curating, business users will describe ideas in natural language, and AI will build functional prototypes instantly. This democratization of development brings more voices into the software process while pushing IT to focus on oversight, scalability, and compliance.”


From Indicators to Insights: Automating Risk Amplification to Strengthen Security Posture

Security analysts don’t want more alerts. They want more relevant ones. Traditional SIEMs generate events using their own internal language that involves things like MITRE tags, rule names and severity scores. But what frontline responders really want to know is which users, systems, or cloud resources are most at risk right now. That’s why contextual risk modeling matters. Instead of alerting on abstract events, modern detection should aggregate risk around assets including users, endpoints, APIs, or services. This shifts the SOC conversation from “What alert fired?” to “Which assets should I care about today?” ... The burden of alert fatigue isn’t just operational but also emotional. Analysts spend hours chasing shadows, pivoting across tools, and running down one-off indicators that lead nowhere. When everything is an anomaly, nothing is actionable. Risk amplification offers a way to lift that unseen yet heavy weight, and the emotional toll it takes, by aligning high-risk signals to high-value assets and surfacing insights only when multiple forms of evidence converge. Rather than relying on a single failed login or endpoint alert, analysts can correlate chains of activity, whether login anomalies, suspicious API queries, lateral movement, or outbound data flows, all of which together paint a much stronger picture of risk.
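The shift from alert-centric to asset-centric triage can be sketched as a simple aggregation. The alert records, field names, and scores below are hypothetical; the point is only that an asset surfaces when multiple distinct forms of evidence converge on it.

```python
from collections import defaultdict

# Hypothetical alert stream: (asset, evidence_type, score).
ALERTS = [
    ("host-7", "login_anomaly", 3),
    ("host-7", "suspicious_api_query", 4),
    ("host-7", "outbound_data_flow", 5),
    ("host-2", "login_anomaly", 3),
]

def amplified_risks(alerts, min_evidence_types=2):
    """Aggregate per-asset risk, surfacing only assets where multiple
    distinct forms of evidence converge."""
    by_asset = defaultdict(list)
    for asset, evidence_type, score in alerts:
        by_asset[asset].append((evidence_type, score))
    surfaced = {}
    for asset, items in by_asset.items():
        evidence_types = {e for e, _ in items}
        if len(evidence_types) >= min_evidence_types:
            surfaced[asset] = sum(s for _, s in items)
    return surfaced
```

Here "host-2" never surfaces: a single failed-login signal on its own is noise, while the converging chain on "host-7" is worth an analyst's attention.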


The Immune System of Software: Can Biology Illuminate Testing?

In software engineering, quality assurance is often framed as identifying bugs, validating outputs, and confirming expected behaviour. But similar to immunology, software testing is much more than verification. It is the process of defining the boundaries of the system, training it to resist failure, and learning from its past weaknesses. Like the immune system, software testing should be multi-layered, adaptive, and capable of evolving over time. ... Just as innate immunity is present from biological birth, unit tests should be present from the birth of our code. Just as innate immunity doesn't need a full diagnostic history to act, unit tests don’t require a full system context. They work in isolation, making them highly efficient. But they also have limits: they can't catch integration issues or logic bugs that emerge from component interactions. That role belongs to more evolved layers. ... Negative testing isn’t about proving what a system can do — it’s about ensuring the system doesn’t do what it must never do. It verifies how the software behaves when exposed to invalid input, unauthorized access, or unexpected data structures. It asks: Does the system fail gracefully? Does it reject the bad while still functioning with the good? Just as an autoimmune disease results from a misrecognition of the self, software bugs often arise when we misrecognise what our code should do and what it should not do.
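A tiny example of the negative-testing idea, built around a hypothetical `parse_percentage` helper invented for illustration: the test asserts not what the function can do, but that it refuses what it must never accept, and fails gracefully with an exception rather than returning garbage.

```python
def parse_percentage(value: str) -> float:
    """Parse a percentage string like '42%' into a float, rejecting bad input."""
    if not isinstance(value, str) or not value.endswith("%"):
        raise ValueError(f"expected a percentage string, got {value!r}")
    number = float(value[:-1])  # raises ValueError on non-numeric input
    if not 0.0 <= number <= 100.0:
        raise ValueError(f"percentage out of range: {number}")
    return number

def test_rejects_invalid_input():
    """Negative tests: every malformed input must be rejected, not accepted."""
    for bad in ["abc", "150%", "", None, "42"]:
        try:
            parse_percentage(bad)
        except ValueError:
            continue  # failed gracefully, as required
        raise AssertionError(f"{bad!r} should have been rejected")
```

The positive tests (valid input parses correctly) are the innate layer; the negative tests define the boundary of "self" that the code must defend.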


CSO hiring on the rise: How to land a top security exec role

“Boards want leaders who can manage risk and reputation, which has made soft skills — such as media handling, crisis communication, and board or financial fluency — nearly as critical as technical depth,” Breckenridge explains. ... “Organizations are seeking cybersecurity leaders who combine technical depth, AI fluency, and strong interpersonal skills,” Fuller says. “AI literacy is now a baseline expectation, as CISOs must understand how to defend against AI-driven threats and manage governance frameworks.” ... Offers of top pay and authority to CSO candidates obviously come with high expectations. Organizations are looking for CSOs with a strong blend of technical expertise, business acumen, and interpersonal strength, Fuller says. Key skills include cloud security, identity and access management (IAM), AI governance, and incident response planning. Beyond technical skills, “power skills” such as communication, creativity, and problem-solving are increasingly valued, Fuller explains. “The ability to translate complex risks into business language and influence board-level decisions is a major differentiator. Traits such as resilience, adaptability, and ethical leadership are essential — not only for managing crises but also for building trust and fostering a culture of security across the enterprise,” he says.


From legacy to SaaS: Why complexity is the enemy of enterprise security

By modernizing, i.e., moving applications to a more SaaS-like consumption model, the network perimeter and associated on-prem complexity tends to dissipate, which is actually a good thing, as it makes ZTNA easier to implement. As the main entry point into an organization’s IT system becomes the web application URL (and browser), this reduces attackers’ opportunities and forces them to focus on the identity layer, subverting authentication, phishing, etc. Of course, a higher degree of trust has to be placed (and tolerated) in SaaS providers, but at least we now have clear guidance on what to look for when transitioning to SaaS and cloud: identity protection, MFA, and phishing-resistant authentication mechanisms become critical—and these are often enforced by default or at least much easier to implement compared to traditional systems. ... The unwillingness to simplify the technology stack by moving to SaaS is then combined with a reluctant and forced move to the cloud for some applications, usually dictated by business priorities or even ransomware attacks (as in the BL case above). This is a toxic mix which increases complexity and reduces a resource-constrained organization’s ability to keep security risks at bay.


Why Metadata Is the New Interface Between IT and AI

A looming risk in enterprise AI today is using the wrong data or proprietary data in AI data pipelines. This may include feeding internal drafts to a public chatbot, training models on outdated or duplicate data, or using sensitive files containing employee, customer, financial or IP data. The implications range from wasted resources to data breaches and reputational damage. A comprehensive metadata management strategy for unstructured data can mitigate these risks by acting as a gatekeeper for AI workflows. For example, if a company wants to train a model to answer customer questions in a chatbot, metadata can be used to exclude internal files, non-final versions, or documents marked as confidential. Only the vetted, tagged, and appropriate content is passed through for embedding and inference. This is a more intelligent, nuanced approach than simply dumping all available files into an AI pipeline. With rich metadata in place, organizations can filter, sort, and segment data based on business requirements, project scope, or risk level. Metadata augments vector labeling for AI inferencing. A metadata management system helps users discover which files to feed the AI tool, such as health benefits documents for an HR chatbot, while vector labeling gives deeper information about what’s in each document.
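As a sketch of metadata acting as a gatekeeper, here is a minimal filter over hypothetical file records (the field names `status`, `tags`, and `confidential` are assumptions for illustration): only final, non-confidential, correctly tagged files pass through to the embedding stage.

```python
# Hypothetical metadata records for unstructured files.
FILES = [
    {"path": "benefits_2025_final.pdf", "tags": {"hr", "benefits"},
     "status": "final", "confidential": False},
    {"path": "benefits_draft_v2.docx", "tags": {"hr", "benefits"},
     "status": "draft", "confidential": False},
    {"path": "salary_bands.xlsx", "tags": {"hr"},
     "status": "final", "confidential": True},
]

def select_for_embedding(files, required_tags):
    """Gatekeeper: pass only vetted content through to embedding/inference.
    Drafts, confidential files, and off-topic files are excluded."""
    return [
        f["path"] for f in files
        if required_tags <= f["tags"]       # must carry all required tags
        and f["status"] == "final"          # no non-final versions
        and not f["confidential"]           # nothing marked confidential
    ]
```

For the HR-chatbot example in the text, only the final benefits document would survive the filter; the draft and the confidential spreadsheet never reach the pipeline.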


Ask a Data Ethicist: What Should You Know About De-Identifying Data?

Simply put, data de-identification is removing or obscuring details from a dataset in order to preserve privacy. We can think about de-identification as existing on a continuum... Pseudonymization is the application of different techniques to obscure the information, but allows it to be accessed when another piece of information (key) is applied. In the above example, the identity number might unlock the full details – Joe Blogs of 123 Meadow Drive, Moab UT. Pseudonymization retains the utility of the data while affording a certain level of privacy. It should be noted that while the terms anonymize or anonymization are widely used – including in regulations – some feel it is not really possible to fully anonymize data, as there is always a non-zero chance of reidentification. Yet, taking reasonable steps on the de-identification continuum is an important part of compliance with requirements that call for the protection of personal data. There are many different articles and resources that discuss a wide variety of types of de-identification techniques and the merits of various approaches ranging from simple masking techniques to more sophisticated types of encryption. The objective is to strike a balance: the technique must be complex enough to ensure sufficient protection while not being burdensome to implement and maintain.
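A minimal sketch of pseudonymization: each identifying value is replaced by a random token, and the returned key table is the extra piece of information that can reverse the mapping. This is illustrative only; a production system would use vetted tooling and store the key table separately, under stricter controls than the pseudonymized data.

```python
import secrets

def pseudonymize(records, field):
    """Replace `field` in each record with a random token.
    Returns the pseudonymized records plus the key table (token -> original)
    that re-identifies them; holding the key table undoes the privacy."""
    tokens = {}  # original value -> token (same person gets the same token)
    out = []
    for rec in records:
        value = rec[field]
        if value not in tokens:
            tokens[value] = secrets.token_hex(8)
        out.append({**rec, field: tokens[value]})
    return out, {token: value for value, token in tokens.items()}
```

Because the same original value always maps to the same token, the data keeps its analytic utility (joins and counts still work) while direct identifiers are obscured.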


5 ways business leaders can transform workplace culture - and it starts by listening

Antony Hausdoerfer, group CIO at auto breakdown specialist The AA, said effective leaders recognize that other people will challenge established ways of working. Hearing these opinions comes with an open management approach. "You need to ensure that you're humble in listening, but then able to make decisions, commit, and act," he said. "Effective listening is about managing with humility with commitment, and that's something we've been very focused on recently." Hausdoerfer told ZDNET how that process works in his IT organization. "I don't know the answer to everything," he said. "In fact, I don't know the answer to many things, but my team does, and by listening to them, we'll probably get the best outcome. Then we commit to act." ... Bev White, CEO at technology and talent solutions provider Nash Squared, said open ears are a key attribute for successful executives. "There are times to speak and times to listen -- good leaders recognize which is which," she said. "The more you listen, the more you will understand how people are really thinking and feeling -- and with so many great people in any business, you're also sure to pick up new information, deepen your understanding of certain issues, and gain key insights you need."


Beyond Efficiency: AI's role in reshaping work and reimagining impact

The workplace of the future is not about humans versus machines; it's about humans working alongside machines. AI's real value lies in augmentation: enabling people to do more, do better, and do what truly matters. Take recruitment, for example. Traditionally time-intensive and often vulnerable to unconscious bias, hiring is being reimagined through AI. Today, organisations can deploy AI to analyse vast talent pools, match skills to roles with precision, and screen candidates based on objective data. This not only reduces time-to-hire but also supports inclusive hiring practices by mitigating biases in decision-making. In fact, across the employee lifecycle, it personalises experiences at scale. From career development tools that recommend roles and learning paths aligned with individual aspirations, to chatbots that provide real-time HR support, AI makes the employee journey more intuitive, proactive, and empowering. ... AI is not without its challenges. As with any transformative technology, its success hinges on responsible deployment. This includes robust governance, transparency, and a commitment to fairness and inclusion. Diversity must be built into the AI lifecycle, from the data it's trained on to the algorithms that guide its decisions. 

Daily Tech Digest - July 22, 2025


Quote for the day:

“Being responsible sometimes means pissing people off.” -- Colin Powell


It might be time for IT to consider AI models that don’t steal

One option that has many pros and cons is to use genAI models that explicitly avoid training on any information that is legally dicey. There are a handful of university-led initiatives that say they try to limit model training data to information that is legally in the clear, such as open source or public domain material. ... “Is it practical to replace the leading models of today right now? No. But that is not the point. This level of quality was built on just 32 ethical data sources. There are millions more that can be used,” Wiggins wrote in response to a reader’s comment on his post. “This is a baseline that proves that Big AI lied. Efforts are underway to add more data that will bring it up to more competitive levels. It is not there yet.” Still, enterprises are investing in and planning for genAI deployments for the long term, and they may find in time that ethically sourced models deliver both safety and performance. ... Tipping the scales in the other direction is the big model makers’ promises of indemnification. Some genAI vendors have said they will cover the legal costs for customers who are sued over content produced by their models. “If the model provides indemnification, this is what enterprises should shoot for,” Moor’s Andersen said. 


The unique, mathematical shortcuts language models use to predict dynamic scenarios

One go-to pattern the team observed, called the “Associative Algorithm,” essentially organizes nearby steps into groups and then calculates a final guess. You can think of this process as being structured like a tree, where the initial numerical arrangement is the “root.” As you move up the tree, adjacent steps are grouped into different branches and multiplied together. At the top of the tree is the final combination of numbers, computed by multiplying each resulting sequence on the branches together. The other way language models guessed the final permutation was through a crafty mechanism called the “Parity-Associative Algorithm,” which essentially whittles down options before grouping them. It determines whether the final arrangement is the result of an even or odd number of rearrangements of individual digits. ... “These behaviors tell us that transformers perform simulation by associative scan. Instead of following state changes step-by-step, the models organize them into hierarchies,” says MIT PhD student and CSAIL affiliate Belinda Li SM ’23, a lead author on the paper. “How do we encourage transformers to learn better state tracking? Instead of imposing that these systems form inferences about data in a human-like, sequential way, perhaps we should cater to the approaches they naturally use when tracking state changes.”
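The tree-structured grouping described above can be illustrated with ordinary permutation composition, which is associative: combining adjacent steps pairwise up a tree yields the same final state as a step-by-step scan, but in logarithmically many levels. This is a toy rendering of the idea, not the paper's code.

```python
def compose(p, q):
    """Compose two permutations (tuples where p[i] is the image of i):
    apply p first, then q."""
    return tuple(q[i] for i in p)

def tree_reduce(perms):
    """Combine adjacent permutations pairwise, level by level, like the
    'Associative Algorithm': a balanced tree instead of a sequential scan."""
    level = list(perms)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(compose(level[i], level[i + 1]))
        if len(level) % 2:          # odd element carries up to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]
```

Associativity is what licenses the hierarchy: the branches can be multiplied in any grouping and the root still equals the result of following the state changes one at a time.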


Role of AI in fortifying cryptocurrency security

In the rapidly expanding realm of Decentralised Finance (DeFi), AI will play a critical role in optimising complex lending, borrowing, and trading protocols. AI can intelligently manage liquidity pools, optimise yield farming strategies for better returns and reduced impermanent loss, and even identify subtle arbitrage opportunities across various platforms. Crucially, AI will also be vital in identifying and mitigating novel types of exploits that are unique to the intricate and interconnected world of DeFi. Looking further ahead, AI will be crucial in developing Quantum-Resistant Cryptography. As quantum computing advances, it poses a theoretical threat to the underlying cryptographic methods that secure current blockchain networks. AI can significantly accelerate the research and development of “post-quantum cryptography” (PQC) algorithms, which are designed to withstand the immense computational power of future quantum computers. AI can also be used to simulate quantum attacks, rigorously testing existing and new cryptographic designs for vulnerabilities. Finally, the concept of Autonomous Regulation could redefine oversight in the crypto space. Instead of traditional, reactive regulatory approaches, AI-driven frameworks could provide real-time, proactive oversight without stifling innovation. 


From Visibility to Action: Why CTEM Is Essential for Modern Cybersecurity Resilience

CTEM shifts the focus from managing IT vulnerabilities in isolation to managing exposure in collaboration, something that’s far more aligned with the operational priorities of today’s organizations. Where traditional approaches center around known vulnerabilities and technical severity, CTEM introduces a more business-driven lens. It demands ongoing visibility, context-rich prioritization, and a tighter alignment between security efforts and organizational impact. In doing so, it moves the conversation from “What’s vulnerable?” to “What actually matters right now?” – a far more useful question when resilience is on the line. What makes CTEM particularly relevant beyond security teams is its emphasis on continuous alignment between exposure data and operational decision-making. This makes it valuable not just for threat reduction, but for supporting broader resilience efforts, ensuring resources are directed toward the exposures most likely to disrupt critical operations. It also complements, rather than replaces, existing practices like attack surface management (ASM). CTEM builds on these foundations with more structured prioritization, validation, and mobilization, turning visibility into actionable risk reduction. 


Driving Platform Adoption: Community Is Your Value

Remember that in a Platform as a Product approach, developers are your customers. If they don’t know what’s available, how to use it or what’s coming next, they’ll find workarounds. These conferences and speaker series are a way to keep developers engaged, improve adoption and ensure the platform stays relevant. There’s a human side to this, too, one often left out of corporate-land’s focus on “the business value” and outcomes: just having a friendly community of humans who like to spend time with each other and learn. ... Successful platform teams have active platform advocacy. This requires at least one person working full time to essentially build empathy with your users by working with and listening to the people who use your platforms. You may start with just one platform advocate who visits with developer teams, listening for feedback while teaching them how to use the platform and associated methodologies. The advocate acts as both a counselor and delegate for your developers. ... The journey to successful platform adoption is more than just communicating technical prowess. Embracing systematic approaches to platform marketing that include clear messaging and positioning based on customers’ needs and a strong brand ethos is the key to communicating the value of your platform.


9 AI development skills tech companies want

“It’s not enough to know how a transformer model works; what matters is knowing when and why to use AI to drive business outcomes,” says Scott Weller, CTO of AI-powered credit risk analysis platform EnFi. “Developers need to understand the tradeoffs between heuristics, traditional software, and machine learning, as well as how to embed AI in workflows in ways that are practical, measurable, and responsible.” ... “In AI-first systems, data is the product,” Weller says. “Developers must be comfortable acquiring, cleaning, labeling, and analyzing data, because poor data hygiene leads to poor model performance.” ... AI safety and reliability engineering “looks at the zero-tolerance safety environment of factory operations, where AI failures could cause safety incidents or production shutdowns,” Miller says. To ensure the trust of its customers, IFS needs developers who can build comprehensive monitoring systems to detect when AI predictions become unreliable and implement automated rollback mechanisms to traditional control methods when needed, Miller says. ... “With the rapid growth of large language models, developers now require a deep understanding of prompt design, effective management of context windows, and seamless integration with LLM APIs—skills that extend well beyond basic ChatGPT interactions,” Tupe says.
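"Effective management of context windows" usually comes down to fitting the most relevant messages into a fixed token budget. A minimal sketch of one common policy, keep the system message plus as many recent messages as fit, with a crude whitespace counter standing in for a real tokenizer:

```python
def trim_to_context(messages, budget, count_tokens=lambda m: len(m.split())):
    """Keep the first (system) message plus the most recent messages that
    fit within the token budget. The default token counter is a crude
    whitespace split; a real integration would use the model's tokenizer."""
    system, rest = messages[0], messages[1:]
    used = count_tokens(system)
    kept = []
    for msg in reversed(rest):          # walk backwards from the newest
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))
```

More sophisticated variants summarize the dropped middle rather than discarding it, but the budget arithmetic is the same.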


Why AI-Driven Logistics and Supply Chains Need Resilient, Always-On Networks

Something worth noting about increased AI usage in supply chains is that as AI-enabled systems become more complex, they also become more delicate, which increases the potential for outages. Something as simple as a single misconfiguration or unintentional interaction between automated security gates can lead to a network outage, preventing supply chain personnel from accessing critical AI applications. During an outage, AI clusters (interconnected GPU/TPU nodes used for training and inference) can also become unavailable. ... Businesses must increase network resiliency to ensure their supply chain and logistics teams always have access to key AI applications, even during network outages and other disruptions. One approach that companies can take to strengthen network resilience is to implement purpose-built infrastructure like out-of-band (OOB) management. With OOB management, network administrators can separate and containerize functions of the management plane, allowing it to operate freely from the primary in-band network. This secondary network acts as an always-available, independent, dedicated channel that administrators can use to remotely access, manage, and troubleshoot network infrastructure.



From architecture to AI: Building future-ready data centers

In some cases, the pace of change is so fast that buildings are being retrofitted even as they are being constructed. Once CPUs are installed, O'Rourke has observed data center owners opting to upgrade racks row by row, rather than converting the entire facility to liquid cooling at once – largely because the building wasn’t originally designed to support higher-density racks. To accommodate this reality, Tate carries out in-row upgrades by providing specialized structures to mount manifolds, which distribute coolant from air-cooled chillers throughout the data halls. “Our role is to support the physical distribution of that cooling infrastructure,” explains O'Rourke. “Manifold systems can’t be supported by existing ceilings or hot aisle containment due to weight limits, so we’ve developed floor-mounted frameworks to hold them.” He adds: “GPU racks also can’t replace all CPU racks one-to-one, as the building structure often can’t support the added load. Instead, GPUs must be strategically placed, and we’ve created solutions to support these selective upgrades.” By designing manifold systems with actuators that integrate with the building management system (BMS), along with compatible hot aisle containment and ceiling structures, Tate has developed a seamless, integrated solution for the white space. 


Weaving reality or warping it? The personalization trap in AI systems

At first, personalization was a way to improve “stickiness” by keeping users engaged longer, returning more often and interacting more deeply with a site or service. Recommendation engines, tailored ads and curated feeds were all designed to keep our attention just a little longer, perhaps to entertain but often to move us to purchase a product. But over time, the goal has expanded. Personalization is no longer just about what holds our attention. It is about what the system knows about each of us: the dynamic graph of our preferences, beliefs and behaviors that becomes more refined with every interaction. Today’s AI systems do not merely predict our preferences. They aim to create a bond through highly personalized interactions and responses, creating a sense that the AI system understands and cares about the user and supports their uniqueness. The tone of a chatbot, the pacing of a reply and the emotional valence of a suggestion are calibrated not only for efficiency but for resonance, pointing toward a more helpful era of technology. It should not be surprising that some people have even fallen in love and married their bots. The machine adapts not just to what we click on, but to who we appear to be. It reflects us back to ourselves in ways that feel intimate, even empathic. 


Microsoft Rushes to Stop Hackers from Wreaking Global Havoc

Multiple different hackers are launching attacks through the Microsoft vulnerability, according to representatives of two cybersecurity firms, CrowdStrike Holdings, Inc. and Google's Mandiant Consulting. Hackers have already used the flaw to break into the systems of national governments in Europe and the Middle East, according to a person familiar with the matter. In the US, they've accessed government systems, including ones belonging to the US Department of Education, Florida's Department of Revenue and the Rhode Island General Assembly, said the person, who spoke on condition that they not be identified discussing the sensitive information. ... The breaches have drawn new scrutiny to Microsoft's efforts to shore up its cybersecurity after a series of high-profile failures. The firm has hired executives from places like the US government and holds weekly meetings with senior executives to make its software more resilient. The company's tech has been subject to several widespread and damaging hacks in recent years, and a 2024 US government report described the company's security culture as in need of urgent reforms. ... "There were ways around the patches," which enabled hackers to break into SharePoint servers by tapping into similar vulnerabilities, said Bernard. "That allowed these attacks to happen." 

Daily Tech Digest - April 15, 2025


Quote for the day:

“Become the kind of leader that people would follow voluntarily, even if you had no title or position.” -- Brian Tracy



Critical Thinking In The Age Of AI-Generated Code

Besides understanding our code, code reviewing AI-generated code is an invaluable skill nowadays. Tools like GitHub's Copilot and DeepCode can code-review better than a junior software developer. Depending on the complexity of the codebase, they can save us time in code reviewing and pinpoint cases that we may have missed, but, after all, they are not flawless. We still need to verify that the AI assistant's code review did not provide any false positives or false negatives. We need to verify that the code review did not miss anything important and that the AI assistant got the context correctly. The hybrid approach seems to be the most effective one: let AI handle the grunt work and rely on developers for the critical analysis. ... After all, code reviewing AI-generated code is an excellent opportunity to educate ourselves while improving our code-reviewing skills. Keep in mind that, to date, AI-generated code optimizes for patterns in its training data. This may not be aligned with coding first principles. AI-generated code may follow templated solutions rather than custom designs. It may include unnecessary defensive code or overly generic implementations. We need to check that it has chosen the most appropriate solution for each code block generated. Another common problem is that LLMs may hallucinate.


DeepCoder: Revolutionizing Software Development with Open-Source AI

One of the DeepCoder project’s most significant contributions is the introduction of verl-pipeline, an optimized extension of the verl open-source RLHF library. The team identified sampling (the generation of long token sequences) as the primary bottleneck in training and developed “one-off pipelining” to address this challenge. This technique overlaps sampling, reward calculation and training, reducing end-to-end training times by up to 2.5x. This optimization is game-changing for coding tasks requiring thousands of unit tests per reinforcement learning iteration, making previously prohibitive training runs accessible to smaller research teams and independent developers. For DevOps professionals, DeepCoder represents an opportunity to integrate advanced code generation directly into CI/CD pipelines without dependency on API-gated services. Teams can fine-tune the model on their codebase, creating customized assistants that understand their specific architecture and coding patterns. ... DeepCoder’s open-source nature aligns with the DevOps collaboration and shared improvement philosophy. As more organizations adopt and contribute to the model, we can expect to see specialized versions emerge for different programming languages and problem domains.
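The flavor of one-off pipelining can be sketched with a one-worker thread pool that launches the next iteration's sampling while the current iteration computes rewards and trains. This is a toy, not verl-pipeline itself; for brevity only sampling is overlapped here, whereas the real system also pipelines reward calculation, and `sample`, `compute_reward`, and `train` are stand-ins for the actual RLHF stages.

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipelined(num_iters, sample, compute_reward, train):
    """Overlap sampling for iteration i+1 with reward/training for iteration i.
    The stage functions are caller-supplied stand-ins for real RLHF stages."""
    log = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(sample, 0)          # kick off the first sample
        for i in range(num_iters):
            batch = pending.result()              # wait for this iteration's batch
            if i + 1 < num_iters:                 # start next sampling early,
                pending = pool.submit(sample, i + 1)  # in the background
            rewards = compute_reward(batch)       # meanwhile, score...
            log.append(train(batch, rewards))     # ...and train on this batch
    return log
```

When sampling and training run on different resources (inference engines vs. training GPUs), this overlap hides most of the sampling latency, which is where the reported end-to-end speedups come from.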


Transforming Software Development

AI assistants are getting smarter, moving beyond prompt-based interactions to anticipate developers’ needs and proactively offer suggestions. This evolution is driven by the rise of AI agents, which can independently execute tasks, learn from their experiences and even collaborate with other agents. Next year, these agents will serve as a central hub for code assistance, streamlining the entire software development lifecycle. AI agents will autonomously write unit tests, refactor code for efficiency and even suggest architectural improvements. Developers’ roles will need to evolve alongside these advancements. AI will not replace them. Far from it; proactive AI assistants and their underlying agents will help developers build new skills and free up their time to focus on higher-value, more strategic tasks. ... AI models are more powerful when trained on internal company data, which allows them to generate insights specific to an organization’s unique operations and objectives. However, this often requires running models on premises for security and compliance reasons. With open source models rapidly closing the performance gap with commercial offerings, more businesses will deploy models on premises in 2025. This will allow organizations to fine-tune models with their own data and deploy AI applications at a fraction of the cost.


Cybercriminal groups embrace corporate structures to scale, sustain operations

We have seen cross collaboration between groups that specialize in specific activities. For example, one group specializes in social engineering, while another focuses on scaling malware and botnets to uncover open servers that yield database breaches. They, in turn, can sell access to those who focus on ransomware attacks. Recently, we have seen collaboration between AI/ML developers who scrape public records to build org charts, as well as lists of real estate holdings. This data is then used en masse with situational and location data to populate PDF attachments in emails that look like real invoices, with executives’ names in fake prior email responses, as part of the thread. ... the recent development in hackers organizing into larger groups has allowed the stakes to get even higher. Look at the Lazarus Group, who pulled off one of the largest heists ever by targeting Bybit and stealing $1.5 billion in Ethereum, as well as subsequently converting $300 million in unrecoverable funds. This group is likely state-sponsored and funding North Korean military programs. Therefore, understanding North Korean national interests will hint at future targets. The increasing scale of their attacks likely reflects greater resources allocated by North Korea, more sophisticated tooling and capabilities, lessons learned from previous operations, and a growing number of personnel trained in cyber operations.


Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?

Not everyone is bullish on the intersection of Web3, agentic AI and blockchain. Forrester Research vice president and principal analyst Martha Bennett is among those who are skeptical. In 2023, she co-authored an online post critical of Worldcoin, now the World project, and her opinion hasn’t changed in several regards. World project still faces major challenges, including privacy issues and concerns about its iris biometric technology, she said. And agentic AI is still in its early stages and not yet capable of supporting Web3 transactions. Most current generative AI (genAI) tools, including LLMs, lack the autonomy defined as “agentic AI.” “There’s no AI technology today that would be able to automate Web3 transactions in a reliable and secure manner,” she said. Given the risks and the potential for exploitation, it’s too soon to rely on AI systems with high autonomy for Web3 transactions. She did note, however, that Web3 already uses automation through smart contracts — self-executing electronic contracts with the terms of the agreement directly written into code. “Will Web3 go mainstream in 2025? My overall answer is no, but there are nuances,” she said. “If mainstream means mass consumer adoption, it’s a definite no. There’s simply not enough utility there for consumers.” Web3, Bennett said, is largely a self-contained financial ecosystem, and efforts to boost adoption through Decentralized Physical Infrastructure Networks (DePIN), such as Tools for Humanity’s, haven’t led to major breakthroughs.


Artificial Intelligence fuels rise of hard-to-detect bots 

“The surge in AI-driven bot creation has serious implications for businesses worldwide,” said Tim Chang, General Manager of Application Security at Thales. “As automated traffic accounts for more than half of all web activity, organisations face heightened risks from bad bots, which are becoming more prolific every day.” ... “This year’s report sheds light on the evolving tactics and techniques utilised by bot attackers. What were once deemed advanced evasion methods have now become standard practice for many malicious bots,” Chang said. “In this rapidly changing environment, businesses must evolve their strategies. It’s crucial to adopt an adaptive and proactive approach, leveraging sophisticated bot detection tools and comprehensive cybersecurity management solutions to build a resilient defense against the ever-shifting landscape of bot-related threats.” ... Analysis in the report reveals a deliberate strategy by cyber attackers to exploit API endpoints that manage sensitive and high-value data. Implications of this trend are especially impactful for industries that rely on APIs for their critical operations and transactions. Financial services, healthcare, and e-commerce sectors are bearing the brunt of these sophisticated bot attacks, making them prime targets for malicious actors seeking to breach sensitive information.


Humans at the helm of an AI-driven grid

A growing number of utilities are turning to AI-based tools to process vast data streams and streamline tasks once managed by manual calculation. For instance, algorithms can analyse weather patterns, historical consumption, and real-time sensor readings to make more accurate power demand and renewable energy generation forecasts. This supports more efficient balancing of supply and demand, reducing the likelihood of overloaded transformers or unexpected brownouts. Some utilities are also exploring AI-driven alarm management, which can filter the flood of alerts triggered by a network issue. Instead of operators sifting through hundreds of notifications, AI tools can be used to identify and highlight the most critical issues in real time. Another AI application is congestion management: detecting trouble spots on the grid where demand might exceed capacity and even proposing rerouting strategies to keep electricity flowing reliably. While still in their early stages, AI tools hold promise for driving operational efficiency in many daily scenarios. ... Even the smartest algorithm, however, lacks the broader perspective and accountability that people bring to grid management. Power and Utility companies are tasked with a public service mandate: they must ensure safety, affordability, and equitable access to electricity.
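As a toy illustration of the forecasting idea (not any utility's actual model), a least-squares fit of demand against temperature shows how weather features and historical consumption can drive a demand estimate. All numbers here are hypothetical.

```python
# toy demand forecast: fit demand = a + b * temperature by ordinary least squares
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

temps = [20, 25, 30, 35]       # afternoon temperature, degrees C (hypothetical)
demand = [500, 600, 700, 800]  # megawatts drawn, largely cooling load (hypothetical)

a, b = fit_linear(temps, demand)
forecast = a + b * 28          # predicted demand for a 28-degree afternoon
print(round(forecast))         # 660
```

A real forecaster would use many more features (day of week, cloud cover, sensor telemetry) and a far richer model, but the shape of the problem is the same: learn a mapping from observed conditions to load.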


CISO Conversations: Maarten Van Horenbeeck, SVP & CSO at Adobe

The digital divide is simple to understand but complex to solve. Fundamentally, it separates those who have access to cyber and cyber knowledge from those who do not. There are areas of the world and socio-economic groups or demographics who have little or very limited access to the internet, and consequently very little awareness of cybersecurity. But cyber and cyber threats are worldwide; and technology is increasingly integrated and interconnected globally. “Cyber issues emanating from the digital divide don’t just play out far away from our homes – they play out very close to our homes as well,” warns Van Horenbeeck. “There’s a huge divide between people who know, for example, not to reuse passwords, to use multi-factor authentication, and those individuals that have none of that experience at all.” In effect, the digital divide creates a largely invisible and unseen threat surface for the long-connected world. He believes that technology companies can play a part in solving this problem by making cybersecurity features easy to understand and use, and cites two examples of the Adobe approach. “We invested, for example, in support for passkeys because we feel it’s a more effective and easier method of authentication that is also more secure.”


How AI, Robotics and Automation Transform Supply Chains

Enterprises designing robots to augment the human workforce need to take design thinking and ergonomic approaches into consideration. Designers must think about how robots comprehend and understand their physical surroundings without tripping over cables or objects on the floor, obstructing movement or causing human injuries. These robots are created with the aim to collaborate with humans for repetitive tasks and lift heavy loads. Last year, OT.today featured stories on how humanoid robots augmented the human workforce at Amazon, Mercedes, NASA and the Piaggio Group. In 2017, Alibaba invested in AI labs and the DAMO Academy. At its flagship Computing Conference in 2018, held in Hangzhou, China, Alibaba showcased a range of robots designed for warehouses, autonomous deliveries and other sectors, including hospitality and pharmaceuticals. More recently, Alibaba invested in LimX Dynamics, a company specializing in humanoid and robotic technology. Japanese automobile manufacturers have been using industrial robots since the early 1980s. Chip manufacturing companies in Taiwan and other countries also use them. Robots assist in surgeries in the healthcare sector. But none of those early manufacturing robots resembled humanoids or even had advanced AI seen in today's robots.


CIOs are overspending on the cloud — but still think it’s worth it

CIOs should also embrace DevOps practices tied to cost reduction when consuming cloud resources, Sellers says. One pitfall that doesn’t get enough attention: Many organizations don’t educate developers on the cost of cloud services, despite the glut of developer services large cloud providers make trivial to call. “I’ve lost track of how many services Amazon provides that developers can just use, and some of those can be quite expensive, but a developer doesn’t really know that,” Sellers says. “They’re like, ‘Instead of writing my own solution to this, I can just call this service that Amazon already provides, and boom, my job is done.’” The disconnect between developers and financial factors in the cloud is a real problem that leads to increased cloud costs, adds Nick Durkin, field CTO at Harness, provider of an AI-driven software development platform. Without knowing the costs of accessing a cloud-based GPU or CPU, for example, a developer is like a home builder who doesn’t know the cost of wood or brick, Durkin says. “If you’re not giving your smartest engineers access to the information about services that they can optimize on, how would you expect them to do it?” he says. “Then, finance comes back a month later with a beating stick.”
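One way to make the cost of "just calling a service" visible to developers is to tally an estimated price at call time. This is a hypothetical sketch: the service names and per-call prices are invented, and a real version would wrap the cloud provider's SDK client rather than a toy meter.

```python
# hypothetical per-call price sheet, surfaced to developers at call time
PRICE_PER_CALL = {
    "managed_ocr": 0.0015,     # invented price, dollars per call
    "gpu_inference": 0.012,    # invented price, dollars per call
    "queue_put": 0.0000004,    # invented price, dollars per call
}

class CostMeter:
    """Tallies an estimated spend as service calls are made."""
    def __init__(self):
        self.total = 0.0

    def call(self, service):
        # a real implementation would delegate to the SDK client here
        self.total += PRICE_PER_CALL[service]
        return service

meter = CostMeter()
for _ in range(1000):
    meter.call("gpu_inference")
print(f"${meter.total:.2f}")  # the loop's estimated bill
```

Even a rough tally like this turns "boom, my job is done" into "this endpoint costs twelve dollars per thousand calls," which is the feedback loop the excerpt says developers are missing.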

Daily Tech Digest - December 25, 2024

The promise and perils of synthetic data

Synthetic data is no panacea, however. It suffers from the same “garbage in, garbage out” problem as all AI. Models create synthetic data, and if the data used to train these models has biases and limitations, their outputs will be similarly tainted. For instance, groups poorly represented in the base data will be so in the synthetic data. “The problem is, you can only do so much,” Keyes said. “Say you only have 30 Black people in a dataset. Extrapolating out might help, but if those 30 people are all middle-class, or all light-skinned, that’s what the ‘representative’ data will all look like.” To this point, a 2023 study by researchers at Rice University and Stanford found that over-reliance on synthetic data during training can create models whose “quality or diversity progressively decrease.” Sampling bias — poor representation of the real world — causes a model’s diversity to worsen after a few generations of training, according to the researchers. Keyes sees additional risks in complex models such as OpenAI’s o1, which he thinks could produce harder-to-spot hallucinations in their synthetic data. These, in turn, could reduce the accuracy of models trained on the data — especially if the hallucinations’ sources aren’t easy to identify.
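The Rice and Stanford finding can be caricatured with a deterministic toy: if each "generation" of synthetic data under-represents the tails of the previous one (sampling bias), the spread of the data shrinks generation over generation. This sketch illustrates the mechanism only; it is not a reproduction of the study, and the "model" here is just a resampler that clips to one standard deviation.

```python
import statistics

def next_generation(data, n=101):
    # a "model" trained on `data` that under-represents the tails:
    # its synthetic output only spans mean +/- 1 standard deviation
    mu = statistics.mean(data)
    sigma = statistics.pstdev(data)
    step = 2 * sigma / (n - 1)
    return [mu - sigma + i * step for i in range(n)]

data = [float(x) for x in range(-50, 51)]  # generation 0: wide, even spread
spreads = []
for _ in range(4):
    spreads.append(round(statistics.pstdev(data), 1))
    data = next_generation(data)
print(spreads)  # the spread (diversity) shrinks every generation
```

Each pass keeps the middle of the distribution and discards the edges, so after a few generations the "representative" data covers only a sliver of the original range, which is exactly Keyes' middle-class, light-skinned example in miniature.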


Federal Privacy Is Inevitable in The US (Prepare Now)

The writing’s on the wall for federal privacy. It’s simply not tenable for almost half the states having varying privacy thresholds and the other half with nothing. Our interconnected business and digital ecosystems need certainty and consistency across the country. Congress can and should stand up for American privacy. The good news? Recent history shows that sweeping reforms are possible. From the CHIPS and Science Act to major pandemic stimulus, lawmakers have shown their ability to meet moments with big regulations. While states deserve credit for filling the privacy void, federal action must follow. For now, there’s no time to waste. Enterprises that build privacy-ready operations today will be better positioned to thrive under future regulations, maintain customer trust, and turn compliance into a competitive advantage. On the other hand, slow-to-move companies risk regulatory penalties and loss of customer confidence in an increasingly privacy-conscious marketplace. Future-forward organizations recognize that investing in privacy isn’t just about compliance; it’s about building a sustainable competitive advantage in the data-driven economy. The choice is clear: invest in privacy now or play catch-up when federal mandates arrive.


AI use cases are going to get even bigger in 2025

Few sectors stand to gain more from AI advancements than defense. “We are witnessing a surge in applications like autonomous drone swarms, electronic spectrum awareness, and real-time battlefield space management, where AI, edge computing, and sensor technologies are integrated to enable faster responses and enhanced precision,” says Meir Friedland, CEO at RF spectrum intelligence company Sensorz. ... “AI is transforming genome sequencing, enabling faster and more accurate analyses of genetic data,” Khalfan Belhoul, CEO at the Dubai Future Foundation, tells Fast Company. “Already, the largest genome banks in the U.K. and the UAE each have over half a million samples, but soon, one genome bank will surpass this with a million samples.” But what does this mean? “It means we are entering an era where healthcare can truly become personalized, where we can anticipate and prevent certain diseases before they even develop,” Belhoul says. ... The potential for AI extends far beyond the use cases dominating today’s headlines. As Friedland notes, “AI’s future lies in multi-domain coordination, edge computing, and autonomous systems.” These advancements are already reshaping industries like manufacturing, agriculture, and finance.


2025 Will Be the Year That AI Agents Transform Crypto

The value of AI agents lies not just in their utility but in their potential to scale human capabilities. Agents are no longer just tools — they are emerging as participants in the on-chain economy, driving innovation across finance, gaming and decentralized social platforms. With protocols such as Virtuals and open-source frameworks like ELIZA, it’s becoming increasingly simple for developers to build, deploy and iterate AI agents that serve an increasingly diverse set of use cases. ... Unlike the core foundational AI models that are developed behind the walled gardens of OpenAI and Anthropic, AI agents are being innovated in the trenches of the crypto world. And for good reason. Blockchains provide the ideal infrastructure as they offer permissionless and frictionless financial rails, enabling agents to seed wallets, transact and send funds autonomously — tasks that would be unfeasible using traditional financial systems. In addition, the open-source nature of crypto allows developers to leverage existing frameworks to launch and iterate on agents faster than ever before. With more no-code platforms like Top Hat gaining traction, it’s only getting easier for anyone to launch an agent in minutes.


Unpacking OpenAI's Latest Approach to Make AI Safer

OpenAI said it used an internal reasoning model to generate synthetic examples of chain-of-thought responses, each referencing specific elements of the company's safety policy. Another model, referred to as the "judge," evaluated these examples to meet quality standards. The approach looks to address the challenges of scalability and consistency, OpenAI said. Human-labeled datasets are labor-intensive and prone to variability, but properly vetted synthetic data can theoretically offer a scalable solution with uniform quality. The method can potentially optimize training and reduce the latency and computational overhead associated with the models reading lengthy safety documents during inference. OpenAI acknowledged that aligning AI models with human safety values remains a challenge. Users continue to develop jailbreak techniques to bypass safety restrictions, such as framing malicious requests in deceptive or emotionally charged contexts. The o3 series models scored better than their peers Gemini 1.5 Flash, GPT-4o and Claude 3.5 Sonnet on the Pareto benchmark, which measures a model's ability to resist common jailbreak strategies. But the results may be of little consequence, as adversarial attacks evolve alongside improvements in model defenses.
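The generator/judge arrangement reduces to a filter loop: one model proposes training examples, another keeps only those grounded in real policy sections. A schematic sketch with invented stand-in data follows; OpenAI's actual pipeline is not public at this level of detail, so every name and record here is hypothetical.

```python
# hypothetical generator/judge pipeline: a "generator" emits candidate
# chain-of-thought examples; a "judge" keeps only policy-grounded ones
POLICY_SECTIONS = {"harassment", "self-harm", "illicit-behavior"}

def generate_candidates():
    # stand-ins for model-written training examples (all content invented)
    return [
        {"answer": "refuse", "cites": "illicit-behavior"},
        {"answer": "comply", "cites": None},               # no policy grounding
        {"answer": "refuse", "cites": "harassment"},
        {"answer": "refuse", "cites": "made-up-section"},  # hallucinated citation
    ]

def judge(example):
    # keep only examples whose reasoning references a real policy section
    return example["cites"] in POLICY_SECTIONS

dataset = [ex for ex in generate_candidates() if judge(ex)]
print(len(dataset))  # candidates that pass the quality bar
```

The point of the judge is exactly this rejection step: synthetic data is only "properly vetted" if something downstream can discard ungrounded or hallucinated examples before they reach training.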


The yellow brick road to agentic AI

Many believe this AI era is the most profound we’ve ever seen in tech. We agree and liken it to mobile’s role in driving on-premises workloads to the cloud and disrupting information technology. But we see this as even more impactful. But for AI agents to work we have to reinvent the software stack and break down 50 years of silo building. The emergence of data lakehouses is not the answer as they are just a bigger siloed asset. Rather, software as a service as we know it will be reimagined. Two prominent chief executives agree. At Amazon Web Services Inc.’s recent AWS re:Invent conference, we sat down with Amazon.com Inc. CEO Andy Jassy. ... There is a clear business imperative behind this shift. We believe companies will differentiate themselves by aligning end-to-end operations with a unified set of plans — from three-year strategic assumptions about demand to real-time, minute-by-minute decisions, such as how to pick, pack and ship individual orders to meet long-term goals. The function of management has always involved planning and resource allocation across various timescales and geographies, but previously there was no software capable of executing on these plans seamlessly across every time horizon.


The AI backlash couldn’t have come at a better time

Developers, engineers, operations personnel, enterprise architects, IT managers, and others need AI to be as boring for them as it has become for consumers. They need it not to be a “thing,” but rather something that is managed and integrated seamlessly into — and supported by — the infrastructure stack and the tools they use to do their jobs. They don’t want to endlessly hear about AI; they just want AI to seamlessly work for them so it just works for customers. ... The models themselves are also, rightly, growing more mainstream. A year ago they were anything but, with talk of potentially gazillions of parameters and fears about the legal, privacy, financial, and even environmental challenges such a data abyss would create. Those LLMs are still out there, and still growing, but many organizations are looking for their models to be far less extreme. They don’t need (or want) a model that includes everything anyone ever learned about anything; rather, they need models that are fine-tuned with data that is relevant to the business, that don’t necessarily require state-of-the-art GPUs, and that promote transparency and trust. As Matt Hicks, CEO of Red Hat, put it, “Small models unlock adoption.”


Systems Thinking in Leading Transformation for the Future

The first step is aligning your internal goals with your external insights. Leaders must articulate a clear vision that ties the organization's purpose to broader societal and industry trends. For Nooyi and PepsiCo, that meant “starting from the outside.” Nooyi tasked her senior leaders with identifying external factors that would likely impact the company. She said, “They pointed to several megatrends … including a preoccupation with health and wellness, scarcity of water and other natural resources, constraints created by global climate change … and a talent market characterized by shortages of key people.” ... Systems thinking involves understanding the interdependencies within and outside an organization. For example, if you are embarking on any transformation project, you’ll likely need to explore new partnerships with suppliers and regional authorities and regulators. ... Using frameworks like OKRs (Objectives and Key Results), you can evaluate how each initiative within your transformation program contributes to the overarching objective. For example, a laudable main aim such as a commitment to environmental sustainability would likely involve numerous associated projects: for example, water conservation, waste reduction, and reduced carbon footprint.


The 2024 cyberwar playbook: Tricks used by nation-state actors

While nation-state actors loved zero days for swift break-ins, phishing remained a sly plan B. It let them craft sneaky schemes to worm into systems, proving that 2024 was the year of both bold strikes and artful cons. Russian nation-state actors leaned heavily on phishing in 2024, with other APTs, like Iranian and Pakistani groups, dabbling in the tactic as well. The following are some of the standout campaigns from 2024 where phishing was the go-to for initial access. ... While credential harvesting through malware delivered via phishing was fairly common, nation-state actors rarely resorted to scavenging credentials from hack forums or drop sites as a primary tactic. When asked, Hughes noted, “I’m not familiar with this being the primary MO by the APTs, who instead are targeting devices, products and vendors with vulnerabilities and misconfigurations, but once inside, they do compromise credentials and use those to pivot, move laterally, persist in environments and more.” ... These actors weren’t always about flashy, custom malware. Quite often, they used legit tools like PowerShell, rootkits, RDP, and other off-the-shelf system features to sneak in, stay undetected, and set up long-term access. This made their attacks stealthy, persistent, and ready for future moves. 


Generative AI is now a must-have tool for technology professionals

As part of this trend, "we are witnessing developers shift from writing code to orchestrating AI agents," said Jithin Bhasker, general manager and vice president at ServiceNow. The efficiency gained from gen AI adoption by technologists isn't just about personal productivity, it's urgent "with the projected shortage of half a million developers by 2030 and the need for a billion new apps," he added. ... Still, as gen AI becomes a commonplace tool in technology shops, Berent-Spillson advises caution. "The real game-changer here is speed, but there's a catch," he said. "While AI can dramatically compress cycle time, it will also amplify any existing process constraints. Think of it like adding a supercharger to your car -- if your chassis isn't solid, you're just going to get to the problem faster." Exercise caution "regarding code quality, maintainability, and IP considerations," McDonagh-Smith advises. "While syntactically correct, AI tools have been seen to create code that's logically flawed or inefficient, leading to potential code degradation over time if not reviewed carefully. We should also guard against software sprawl where the ease of creating AI-generated code results in overly complex or unnecessary code that might make projects more difficult to maintain over time."



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - August 17, 2024

The importance of connectivity in IoT

There is no point in having IoT if the connectivity is weak. Without reliable connectivity, data from sensors and devices that is meant to be collected and analysed in real time may be delayed by the time it is eventually delivered. In healthcare, connected devices monitor the vital signs of patients in intensive care in real time and alert the physician to any readings outside the specified limits. ...  The future evolution of connectivity technologies will combine with IoT to significantly expand its capabilities. The arrival of 5G will enable high-speed, low-latency connections. This transition will usher in IoT systems that were previously impossible, such as self-driving vehicles that instantaneously analyse vehicle states and provide real-time collision avoidance. The evolution of edge computing will bring data-processing closer to the edge (the IoT devices), thereby significantly reducing latency and bandwidth costs. Connectivity underpins almost everything we see as important with IoT – the data exchange, real-time usage, scale and interoperability we access in our systems.
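The intensive-care example boils down to range checks on each vital sign with an alert on violation. A minimal sketch, with hypothetical vital names and limits:

```python
def check_vitals(reading, limits):
    """Return alerts for any vital sign outside its configured range."""
    alerts = []
    for name, value in reading.items():
        lo, hi = limits[name]
        if not lo <= value <= hi:
            alerts.append(f"{name}={value} outside [{lo}, {hi}]")
    return alerts

# hypothetical clinical limits; a real system would be clinician-configured
limits = {"heart_rate": (50, 120), "spo2": (92, 100)}
print(check_vitals({"heart_rate": 134, "spo2": 97}, limits))
```

The connectivity argument is that this check is only useful if the reading arrives while the patient can still be helped; a delayed packet turns a real-time alert into a historical record.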


Aren’t We Transformed Yet? Why Digital Transformation Needs More Work

When it comes to enterprise development, platforms alone can’t address the critical challenge of maintaining consistency between development, test, staging, and production environments. What teams really need to strive for is seamless propagation of changes between environments that are kept production-like through synchronization, with full control over the process. This control enables the integration of crucial safety steps such as approvals, scans, and automated testing, ensuring that issues are caught and addressed early in the development cycle. Many enterprises are implementing real-time visualization capabilities to provide administrators and developers with immediate insight into differences between instances, including scoped apps, store apps, plugins, update sets, and even versions across the entire landscape. This extended visibility is invaluable for quickly identifying and resolving discrepancies before they can cause problems in production environments. A lack of focus on achieving real-time multi-environment visibility is akin to performing a medical procedure without an X-ray, CT, or MRI of the patient.
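The instance-comparison capability described here is, at its core, a structured diff of versions across environments. A simplified sketch, with invented app names and version strings:

```python
def diff_instances(dev, prod):
    """Report apps/plugins whose versions differ between two instances."""
    drift = {}
    for name in dev.keys() | prod.keys():
        d, p = dev.get(name), prod.get(name)
        if d != p:
            drift[name] = {"dev": d, "prod": p}
    return drift

# hypothetical instance inventories (name -> installed version)
dev = {"incident_app": "2.4.1", "scan_plugin": "1.0.0", "beta_app": "0.9"}
prod = {"incident_app": "2.3.0", "scan_plugin": "1.0.0"}

print(diff_instances(dev, prod))
```

Anything in the drift report (a version mismatch, or an app present in only one environment) is exactly the discrepancy the excerpt says teams should catch before it causes a production problem.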


Why Staging Doesn’t Scale for Microservice Testing

So are we doomed to live in a world where staging is eternally broken? As we’ve seen, traditional approaches to staging environments are fraught with challenges. To overcome these, we need to think differently. This brings us to a promising new approach: canary-style testing in shared environments. This method allows developers to test their changes in isolation within a shared staging environment. It works by creating a “shadow” deployment of the services affected by a developer’s changes while leaving the rest of the environment untouched. This approach is similar to canary deployments in production but applied to the staging environment. The key benefit is that developers can share an environment without affecting each other’s work. When a developer wants to test a change, the system creates a unique path through the environment that includes their modified services, while using the existing versions of all other services. Moreover, this approach enables testing at the granularity of every code change or pull request. This means developers can catch issues very early in the development process, often before the code is merged into the main branch. 
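The routing trick behind canary-style staging can be shown in a few lines: a resolver returns the developer's sandbox build only for requests tagged with their tenant, and the baseline version otherwise. Service names, version labels, and the header scheme below are all hypothetical.

```python
# shared staging baseline: the versions everyone sees by default
BASELINE = {"cart": "cart-v12", "checkout": "checkout-v7", "auth": "auth-v4"}

def resolve(service, sandbox_overrides, tenant_header):
    # only requests carrying the developer's tenant tag see the overrides;
    # every other service, and every other request, hits the baseline
    if tenant_header and service in sandbox_overrides:
        return sandbox_overrides[service]
    return BASELINE[service]

overrides = {"checkout": "checkout-pr-512"}  # hypothetical PR build under test

print(resolve("checkout", overrides, "dev-alice"))  # the sandbox build
print(resolve("cart", overrides, "dev-alice"))      # untouched baseline
print(resolve("checkout", overrides, None))         # other users unaffected
```

This is the isolation property the article describes: the developer's unique path through the environment includes only their modified services, so two developers testing different pull requests never see each other's changes.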


A world-first law in Europe is targeting artificial intelligence. Other countries can learn from it

The act contains a list of prohibited high-risk systems. This list includes AI systems that use subliminal techniques to manipulate individual decisions. It also includes unrestricted and real-life facial recognition systems used by law enforcement authorities, similar to those currently used in China. Other AI systems, such as those used by government authorities or in education and healthcare, are also considered high risk. Although these aren’t prohibited, they must comply with many requirements. ... The EU is not alone in taking action to tame the AI revolution. Earlier this year the Council of Europe, an international human rights organisation with 46 member states, adopted the first international treaty requiring AI to respect human rights, democracy and the rule of law. Canada is also discussing the AI and Data Bill. Like the EU laws, this will set rules to various AI systems, depending on their risks. Instead of a single law, the US government recently proposed a number of different laws addressing different AI systems in various sectors. ... The risk-based approach to AI regulation, used by the EU and other countries, is a good start when thinking about how to regulate diverse AI technologies.


Building constructive partnerships to drive digital transformation

The finance team needs to have a ‘seat at the table’ from the very beginning to overcome these challenges and effect successful transformation. Too often, finance only becomes involved when it comes to the cost and financing of the project, and when finance leaders do try to become involved, they can have difficulty gaining access to the needed data. This was recently confirmed by members of the Future of Finance Leadership Advisory Group, where almost half of the group polled (47%) noted challenges gaining access to needed data. As finance professionals understand the needs of stakeholders within the business, they are in the best position to outline what is needed for IT to create an effective, efficient structure. Finance professionals are in-house consultants who collaborate with other functions to understand their workings and end-to-end procedures, discover where both problems and opportunities exist, identify where processes can be improved, and ultimately find solutions. Digital transformation projects rely on harmonizing processes and standardizing systems across different operations. 


DevSecOps: Integrating Security Into the DevOps Lifecycle

The core of DevSecOps is ‘security as code’, a principle that dictates embedding security into the software development process. To keep every release tight on security, we weave those practices into the heart of our CI/CD flow. Automation is key here, as it smooths out the whole security gig in our dev process, ensuring we are safe from the get-go without slowing us down. A shared responsibility model is another pillar of DevSecOps. Security is no longer the sole domain of a separate security team but a shared concern across all teams involved in the development lifecycle. Working together, security isn’t just slapped on at the end but baked into every step from start to finish. ... Adopting DevSecOps is not without its challenges. Shifting to DevSecOps means we’ve got to knock down the walls that have long kept our devs, ops and security folks in separate corners. Balancing the need for rapid deployment with security considerations can be challenging. To nail DevSecOps, teams must level up their skills through targeted training. Weaving together seasoned systems with cutting-edge DevSecOps tactics calls for a sharp, strategic approach. 
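A "security as code" gate can be as simple as a pipeline step that fails the build when a dependency matches a known advisory. The advisory data and dependency names below are invented for illustration; a real gate would query a vulnerability database and wrap an actual scanner.

```python
# hypothetical advisory feed: (package, version) pairs with known issues
ADVISORIES = {("libfoo", "1.2.0"), ("parser", "0.4.1")}

def security_gate(dependencies):
    """CI step: fail the pipeline if any dependency matches an advisory."""
    findings = [dep for dep in dependencies if dep in ADVISORIES]
    return ("fail", findings) if findings else ("pass", [])

deps = [("libfoo", "1.2.0"), ("requests-like", "2.31.0")]
status, findings = security_gate(deps)
print(status, findings)
```

Because the check runs automatically on every build, security is enforced "from the get-go" rather than slapped on at the end, which is the shift-left behaviour the excerpt describes.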


Critical Android Vulnerability Impacting Millions of Pixel Devices Worldwide

This backdoor vulnerability, undetectable by standard security measures, allows unauthorized remote code execution, enabling cybercriminals to compromise devices without user intervention or knowledge due to the app’s privileged system-level status and inability to be uninstalled. The Showcase.apk application possesses excessive system-level privileges, enabling it to fundamentally alter the phone’s operating system despite performing a function that does not necessitate such high permissions. The application’s configuration file retrieval lacks essential security measures, such as domain verification, potentially exposing the device to unauthorized modifications and malicious code execution through compromised configuration parameters. The application suffers from multiple security vulnerabilities. Insecure default variable initialization during certificate and signature verification allows bypass of validation checks. Configuration file tampering risks compromise, while the application’s reliance on bundled public keys, signatures, and certificates creates a bypass vector for verification.
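The missing domain verification amounts to checking the scheme and hostname of the configuration URL before fetching it. The allowed host below is hypothetical; this is a sketch of the kind of check the report says the app lacked, not the vendor's actual code.

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"config.example-vendor.com"}  # hypothetical trusted host

def is_trusted_config_url(url):
    """Verify scheme and domain before fetching a configuration file."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS

print(is_trusted_config_url("https://config.example-vendor.com/showcase.cfg"))  # True
print(is_trusted_config_url("http://config.example-vendor.com/showcase.cfg"))   # False: cleartext
print(is_trusted_config_url("https://attacker.net/showcase.cfg"))               # False: wrong host
```

Without a gate like this, an attacker who controls DNS or the network path can serve a malicious configuration, which is exactly the tampering risk the researchers flagged.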


Using Artificial Intelligence in surgery and drug discovery

“We’re seeing how AI is adapting, learning, and starting to give us more suggestions and even take on some independent tasks. This development is particularly thrilling because it spans across diagnostics, therapeutics, and theranostics—covering a wide range of medical areas. We’re on the brink of AI and robotics merging together in a very meaningful way,” Dr Rao said. However, he said he would like to add a word of caution. He said he often tells junior enthusiasts who are eager to use AI in everything: AI is not a replacement for natural stupidity. ... He said that one of the most impressive applications of this AI was during the preparation of a US FDA application, which is typically a very cumbersome and expensive process. “At that point, I’d already completed the preclinical phase but wasn’t certain about the additional 20-30 tests I might need. Instead of spending hundreds of thousands of dollars on trial and error, we fed all our data into this AI system. Now, it’s important to note that pharma companies are usually reluctant to share their proprietary data, so gathering information is often a challenge,” he said.  


Mastercard Is Betting on Crypto—But Not Stablecoins

“We’re opening up this crypto purchase power to our 100 million-plus acceptance locations,” Raj Dhamodharan, Mastercard's head of crypto and blockchain, told Decrypt. “If consumers want to buy into it, if they want to be able to use it, we want to enable that—in a safe way.” Perhaps in the name of safety, the new MetaMask Card isn’t compatible with most cryptocurrencies. You can’t use it to buy a plane ticket with Pepecoin, or a sandwich with SHIB. The card is only compatible with dominant stablecoins USDT and USDC, as well as wrapped Ethereum. ... Dhamodharan and his team are currently endeavoring to create an alternative system to stablecoins that—instead of putting crypto companies like Circle and Tether in the catbird seat of the new digital economy—keeps payment services like Mastercard, and traditional banks, at center. Key to this plan is unlocking the potential of bank deposits, which already exist on digital ledgers—just not ones that live on-chain. Dhamodharan estimates that some $15 trillion worth of digital bank deposits currently exist in the United States alone.


A Group Linked To Ransomhub Operation Employs EDR-Killing Tool

Experts believe RansomHub is a rebrand of the Knight ransomware. Knight, also known as Cyclops 2.0, appeared on the threat landscape in May 2023. The malware targets multiple platforms, including Windows, Linux, macOS, ESXi, and Android, and its operators used a double-extortion model for their RaaS operation. The Knight ransomware-as-a-service operation shut down in February 2024, and the malware's source code was likely sold to the threat actor who relaunched it as RansomHub. ... "One main difference between the two ransomware families is the commands run through cmd.exe. While the specific commands may vary, they can be configured either when the payload is built or during configuration. Despite the differences in commands, the sequence and method of their execution relative to other operations remain the same," states the report published by Symantec. Although RansomHub only emerged in February 2024, it has rapidly grown and, over the past three months, has become the fourth most prolific ransomware operator based on the number of publicly claimed attacks.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney

Daily Tech Digest - November 12, 2023

The metaverse has virtually disappeared. Here's why it's generative AI's fault

"It's basically going through the Gartner Hype Cycle for Emerging Technologies," she says. "We've had the hype and now we're seeing the reality. The metaverse was capturing people's imagination. But we're still looking for proven use cases that are going to generate value." Searle's assertion that the metaverse is suffering a familiar fate to other over-hyped technologies is certainly one explanatory factor for the drop in interest in the metaverse. But another huge contributory factor is the rapid rise of artificial intelligence (AI). ... Of course, the rapid take up of generative AI isn't the only narrative in this story; there's a whole series of potential concerns, such as hallucinations, plagiarism, and ethics, that need to be dealt with sooner rather than later. But if you want to impress your family and friends with a tool that seems to work like magic, then generative AI is the one. On the other hand, the metaverse -- just like the blockchain before it -- feels a bit like a rabbit that's stuck in a magician's hat. Entering the metaverse often isn't as easy as its proponents have promised. 


Why the service industry needs blockchain, explained

The difficulty of integrating blockchain with existing infrastructure and processes is a significant obstacle. Because service providers frequently use a variety of platforms and technologies, achieving seamless integration can be difficult. It might be difficult to protect data security and privacy while still adhering to regulations. Blockchain’s transparency conflicts with the requirement to protect sensitive customer information, necessitating careful design and implementation of privacy measures. Another major challenge is establishing communication and data exchange across various blockchain networks and traditional systems. To facilitate seamless interoperability, service providers need to spend time developing standardized protocols, which can be expensive and time-consuming. Moreover, there are scalability concerns. Blockchain networks, especially public ones, may face limitations in handling a high volume of transactions efficiently. Delays and higher expenses may result from this, especially in service industries where several quick transactions are necessary.


Why developer productivity isn’t all about tooling and AI

Creative work requires some degree of isolation. Each time they sit down to code, developers build up context for what they’re doing in their head; they play a game with their imagination where they’re slotting their next line of code into the larger picture of their project so everything fits together. Imagine you’re holding all this context in your head — and then someone pings you on Slack with a small request. All the context you’ve built up collapses in that instant. It takes time to reorient yourself. It’s like trying to sleep and getting woken up every hour. ... Another factor that gets in the way of developer productivity is a lack of clarity on what engineers are supposed to be doing. If developers have to spend time trying to figure out the requirements of what they’re building while they’re building it, they’re ultimately doing two types of work: Prioritization and coding. These disparate types of work don’t mesh. Figuring out what to build requires conversations with users, extensive research, talks with stakeholders across the organization and other tasks well outside the scope of software development. 


Here’s What a Software Architect Does in an Agile Team

An architect is probably not a valid role on an agile team. I admit I have at times been overzealous with non-coding members of a dev team. The less militant version of this is to be aware of 'pigs' and 'chickens' in the agile sense: when making breakfast, chickens lay eggs but pigs literally have skin in the game, so only pigs should attend daily agile stand-ups. There are three problems with the role of architect in classic agile; think of them as Lutheran protestant theses nailed to the door, or more likely to the planning wall. First, there are no upfront design phases in agile: "The best architectures, requirements, and designs emerge from self-organizing teams." Second, an architect cannot be an approver and a cause of delay. This leads to the idea that architectural know-how should be spread among the other team members, which is often the case; however, it elides the fact that architectural responsibility then falls to no one, even if people feel they may be accountable. Remember your RACI matrix. Third, should all agile developers be architects on a project? This makes little sense, since architecture describes a singular plan.


AI’s Ability to Reason: Statistics vs. Logic

As a simplistic existence proof that today's AI does not reason with logic, consider the following basic algebra problem, which was given to Bing/OpenAI GPT to solve. The gist of the problem is that there are two rectangles, each having the same height (though this detail is not clearly stated in the sourcing 6th-grade math text) but different widths, and the area of each is given. The rectangles are positioned in the math text to suggest that they may be aggregated into a larger rectangle whose width is the sum of the widths of the smaller rectangles, perhaps as a hint. The request to find the length (height) and the widths is a test of whether OpenAI's GPT via Bing would determine that there are enough equations to match the unknowns. There aren't: the third equation, for the combined rectangle, is simply the sum of the other two, leaving two independent equations for three unknowns. GPT never discovered that the system was one equation short. Instead, it attempted to find the height and widths, and it responded as if it had successfully solved the problem. Everything went awry once the mismatch between the number of independent equations and the number of unknowns was missed.
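A quick numeric check shows why the system is underdetermined. The areas below are made up (the article's figure is not reproduced here): with height h and widths w1 and w2, the equations are h·w1 = A1, h·w2 = A2, and h·(w1 + w2) = A1 + A2, where the third is just the sum of the first two.

```python
# Hypothetical areas for the two rectangles; the actual values from
# the 6th-grade text are not given in the article.
A1, A2 = 12.0, 20.0

def satisfies_all(h, w1, w2):
    """True if (h, w1, w2) satisfies all three area equations."""
    return (abs(h * w1 - A1) < 1e-9
            and abs(h * w2 - A2) < 1e-9
            and abs(h * (w1 + w2) - (A1 + A2)) < 1e-9)

# Two different heights both satisfy every equation, so the problem
# has no unique solution: two independent equations, three unknowns.
print(satisfies_all(2.0, 6.0, 10.0))  # True
print(satisfies_all(4.0, 3.0, 5.0))   # True
```

A solver reasoning with logic would report the system as underdetermined rather than emit a single "answer," which is precisely the check GPT skipped.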


Security Is EVERYBODY’s Business, But CISOs Need to Lead

Cybersecurity is not an audit or internal audit; there is a fine line of difference there. And as much as the CISO is seen as somewhat of an enforcer, they need to be seen as an enabler of the business. CISOs need to have very direct, effective, and transparent communication with the board members when it comes to quantification of everything that they're doing. And when I say quantification, what I mean is quantification of risks to the organization. Some board members will be closer to cybersecurity risks; some may be closer to reputational or financial risk. But if a CISO can stitch that story together and quantify it for the board as an audience, I think that goes a long way. That's what's needed because, with the threat landscape changing and new capabilities coming into play in the current market, I think it's critical. CISOs need to ensure the message is articulated well in the boardroom.


How Agile Managers Use Uncertainty to Create Better Decisions Faster

Here's the problem I see with big, long-term, and final management decisions: the decision is too large to have any certainty at all. Remember I said I don't take long consulting engagements? Early in my consulting career, I learned that even a “guaranteed” consulting project was not a guarantee at all. Sure, the client might pay a kill fee (a portion of the unused project budget), but most of the time, the client said (on a Friday afternoon), “Thanks. The world has changed. Don't come back on Monday.” While I always continued my marketing so my business would survive, I felt as if the clients cheated themselves. Because we thought we had more time, we didn't create smaller goals and achieve them. Our work was incomplete—according to their goals. And that's what people remember. Not that they changed the circumstances, but that we didn't finish. That's exactly what happens when managers try to decide for a long time without revisiting their decisions. The world changes. If the world changes enough, the managers feel the need to lay people off, not just stop efforts. Those layoffs are a result of too-long and too-large management decisions.


Technical and digital debt can devastate your digital ambition

Of course, no organisation can afford zero technical debt (is this even possible?). The judgement here is targeting existing technical debt in priority order: deciding what not to do is just as important as what to do. You will be better able to manage the high expectations of stakeholders, shape the transformation and prioritise investment when you have this insight. Ask yourself these questions: What technical debt will act as an anchor when trying to increase the pace of change, irrespective of how fast your new IT engineering and product-based approaches to change are? To put it another way, which single piece of technical debt will limit the flow of value, irrespective of how slick everything else is? To be able to adapt at pace and at short notice, responding to market opportunities, where is your underlying technology strong but resistant to change? Customers simply expect your digital channels to work; where must you improve the reliability of your service? And where can you increase cost-effectiveness or risk mitigation through targeted automation, one of the treatment strategies available to you?


How AI and Crypto is Transforming the Future of Decentralized Finance

As time has passed, the crypto industry has evolved into a breeding ground for fraudulent activities and deception. Safeguarding investors from fraud has become increasingly vital, especially with the influx of initial coin offerings and new platforms entering the market. The encouraging news is that AI and crypto can effectively prevent fraud attempts and ensure that investors adhere to financial compliance. AI bots, for example, can detect and flag fraudulent transactions, preventing them from proceeding unless confirmed by a human. Confirming crypto transactions often takes up to 24 hours due to reliance on consensus methods, and these delays pose a persistent challenge for the sector. With recent advancements in AI technology, there are now enhanced trade management options: some companies are adopting innovative consensus methods that reduce transaction times to just a few seconds. This improvement holds potential benefits for the ...
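The flag-and-hold flow described above, where an AI score blocks a transaction pending human confirmation, can be sketched as follows. The scoring model, threshold, and field names are illustrative assumptions, not any particular exchange's API:

```python
# Minimal sketch: an upstream model produces a fraud score in [0, 1];
# transactions above the threshold are held for human review instead
# of proceeding automatically.
FRAUD_THRESHOLD = 0.8  # assumed cut-off, tuned per deployment

def route_transaction(tx, fraud_score):
    """Auto-approve low-risk transactions; hold high-risk ones."""
    if fraud_score >= FRAUD_THRESHOLD:
        return {"tx": tx, "status": "held_for_human_review"}
    return {"tx": tx, "status": "approved"}

print(route_transaction({"amount": 50}, 0.12)["status"])    # approved
print(route_transaction({"amount": 9500}, 0.93)["status"])  # held_for_human_review
```

In practice the threshold and the reviewer queue would be tuned against false-positive cost, but the core gate is this simple.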


Secure Together: ATO Defense for Businesses and Consumers

First off, businesses need to take the lead in forming a stronger partnership with their customers. This means educating both customers and employees on proper security measures. Websites operating with user accounts, whether serving individuals or corporations, often find themselves in the crosshairs of swindlers intent on ATO. We mentioned above that phishing is a common tactic, so it's imperative to consistently educate customers and employees about the looming menace of online security breaches like phishing, including how phishing attempts trick people and how to avoid getting caught. Adopt a vigilant stance on security by ingraining robust preventive protocols, including routine password updates, and provide guidelines for safeguarding user credentials. ... Training does not end there. The MGM Resorts cyberattack we cited above also involved a fraudster tricking a customer-support help desk. Businesses must train their staff to stop these attempted breaches, for example by knowing how to ask questions that only a legitimate account holder could answer.



Quote for the day:

"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley