Daily Tech Digest - November 15, 2024

Beyond the breach: How cloud ransomware is redefining cyber threats in 2024

Unlike conventional ransomware that targets individual computers or on-premises servers, attackers are now setting their sights on cloud infrastructures that host vast amounts of data and critical services. This evolution represents a new frontier in cyber threats, requiring Indian cybersecurity practitioners to rethink and relearn defence strategies. Traditional security measures and last year’s playbooks are no longer sufficient. Attackers are exploiting misconfigured or poorly secured cloud storage platforms such as Amazon Web Services (AWS) Simple Storage Service (S3) and Microsoft Azure Blob Storage. By identifying cloud storage buckets with overly permissive access controls, cybercriminals gain unauthorised entry, copy data to their own servers, encrypt or delete the original files, and then demand a ransom for their return. ... Collaboration and adaptability are essential. By understanding the unique challenges posed by cloud security, Indian organisations can implement comprehensive strategies that not only protect against current threats but also anticipate future ones. Proactive measures—such as strengthening access controls, adopting advanced threat detection technologies, training employees, and staying informed—are crucial steps in defending against these evolving attacks.
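
As a minimal illustration of the access-control hardening the article recommends, the sketch below uses boto3 (assumed installed and configured with read-only AWS credentials) to flag S3 buckets whose Block Public Access settings are not fully enabled. It is a starting point for an audit under those assumptions, not a complete cloud-storage security review.

```python
# Minimal sketch: flag S3 buckets whose "Block Public Access" settings are not fully enabled.
# Assumes boto3 plus credentials allowed to call s3:ListAllMyBuckets and s3:GetPublicAccessBlock.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):  # any of the four settings disabled
                flagged.append(name)
        except ClientError as err:
            # No configuration at all is the riskiest case.
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"Review bucket: {name}")
```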


Harnessing AI’s Potential to Transform Payment Processing

There are many use cases that show how AI increases the speed and convenience of payment processing. For instance, Apple Pay now offers biometric authentication, which uses AI facial recognition and fingerprint scanning to authenticate users. This enables mobile payment customers to use quick and secure authentication without remembering passwords or PINs. Similarly, Apple Pay’s competitor, PayPal, uses AI for real-time fraud detection, employing ML algorithms to monitor transactions for signs of fraud and ensure that customers’ financial information remains secure. ... One issue is AI systems rely on massive amounts of data, including sensitive data, which can lead to data breaches, identity theft, and compliance issues. In addition, AI algorithms trained on biased data can perpetuate those biases. Making matters worse, many AI systems lack transparency, so the bias may grow and lead to unequal access to financial services. Another issue is the potential dependence on outside vendors, which is common with many AI technologies. ... To reduce the current risks associated with AI and safely unleash its full potential to improve payment processing, it is imperative for organizations to take a multi-layered approach that includes technical safeguards, organizational policies, and regulatory compliance. 
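
The real-time fraud-detection pattern described above can be approximated with a simple unsupervised anomaly detector. The sketch below uses scikit-learn's IsolationForest on made-up transaction features (amount, hour, distance from home); the feature set, data, and threshold are illustrative assumptions and bear no relation to how PayPal's production systems actually work.

```python
# Toy anomaly-detection sketch for transaction scoring (illustrative only; assumes scikit-learn and numpy).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Fabricated historical transactions: [amount_usd, hour_of_day, km_from_home]
history = np.column_stack([
    rng.gamma(2.0, 30.0, 5000),      # mostly small amounts
    rng.integers(7, 23, 5000),       # daytime hours
    rng.exponential(5.0, 5000),      # usually close to home
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_transactions = np.array([
    [25.0, 14, 2.0],      # ordinary purchase
    [4200.0, 3, 900.0],   # large, overnight, far from home
])

# predict() returns 1 for inliers and -1 for suspected anomalies.
for tx, label in zip(new_transactions, model.predict(new_transactions)):
    print(tx, "FLAG for review" if label == -1 else "ok")
```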


Do you need an AI ethicist?

The goal of advising on ethics is not to create a service desk model, where colleagues or clients always have to come back to the ethicist for additional guidance. Ethicists generally aim for their stakeholders to achieve some level of independence. “We really want to make our partners self-sufficient. We want to teach them to do this work on their own,” Sample said. Ethicists can promote ethics as a core company value, no different from teamwork, agility, or innovation. Key to this transformation is an understanding of the organization’s goal in implementing AI. “If we believe that artificial intelligence is going to transform business models…then it becomes incumbent on an organization to make sure that the senior executives and the board never become disconnected from what AI is doing for or to their organization, workforce, or customers,” Menachemson said. This alignment may be especially necessary in an environment where companies are diving head-first into AI without any clear strategic direction, simply because the technology is in vogue. A dedicated ethicist or team could address one of the most foundational issues surrounding AI, notes Gartner’s Willemsen. One of the most frequently asked questions at a board level, regardless of the project at hand, is whether the company can use AI for it, he said. 


Why We Need Inclusive Data Governance in the Age of AI

Inclusive data governance processes involve multiple stakeholders, giving equal space in this decision making to diverse groups from civil society, as well as space for direct representation of affected communities as active stakeholders. This links to, but is an idea broader than, the concept of multi-stakeholder governance for technology, which first came to prominence at the international level, in institutions such as the Internet Corporation for Assigned Names and Numbers and the Internet Governance Forum. ... Involving the public and civil society in decisions about data is not cost-free. Taking the steps that are needed to surmount the practical challenges, and skepticism about the utility of public involvement in a technical and technocratic field, frequently requires arguments that go beyond it being the right thing to do. ... The risks for people, communities and society, but also for organizations operating within the data and AI marketplace and supply chain, can be reduced through greater inclusion earlier in the design process. But organizational self-interest will not motivate the scope or depth that is required. Reducing the reality and perception of “participation-washing” means requirements for consultation in the design of data and AI systems need to be robust and enforceable. 


Strategies to navigate the pitfalls of cloud costs

If cloud customers spend too much money, it’s usually because they created cost-ineffective deployments. It’s common knowledge that many enterprises “lifted and shifted” their way to the clouds with little thought about how inefficient those systems would be in the new infrastructure. ... Purposely or not, public cloud providers created intricate pricing structures that are nearly incomprehensible to anyone who does not spend each day creating cloud pricing structures to cover every possible use. As a result, enterprises often face unexpected expenses. Many of my clients frequently complain that they have no idea how to manage their cloud bills because they don’t know what they’re paying for. ... Cloud providers often encourage enterprises to overprovision resources “just in case.” Enterprises still pay for that unused capacity, so the misalignment dramatically elevates costs without adding business value. When I ask my clients why they provision so much more storage or computing capacity than their workload requires, the most common answer is, “My cloud provider told me to.” ... One of the best features of public cloud computing is autoscaling, so you’ll never run out of resources or suffer from bad performance due to insufficient resource provisioning. However, autoscaling often leads to colossal cloud bills because it is frequently triggered without good governance or purpose.
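
One concrete way to act on this is to periodically compare what is provisioned against what is actually needed. The sketch below is an assumption-laden illustration rather than a FinOps product: it uses boto3 to list EC2 Auto Scaling groups and flags those whose maximum size is far above current desired capacity, one crude signal of "just in case" overprovisioning. The 3x ratio is an arbitrary threshold.

```python
# Sketch: flag Auto Scaling groups whose MaxSize greatly exceeds current DesiredCapacity.
# Assumes boto3 plus credentials with autoscaling:DescribeAutoScalingGroups; the 3x ratio is arbitrary.
import boto3

autoscaling = boto3.client("autoscaling")

def flag_overprovisioned(ratio=3.0):
    flagged = []
    paginator = autoscaling.get_paginator("describe_auto_scaling_groups")
    for page in paginator.paginate():
        for group in page["AutoScalingGroups"]:
            desired = max(group["DesiredCapacity"], 1)
            if group["MaxSize"] / desired >= ratio:
                flagged.append((group["AutoScalingGroupName"],
                                group["DesiredCapacity"], group["MaxSize"]))
    return flagged

if __name__ == "__main__":
    for name, desired, max_size in flag_overprovisioned():
        print(f"{name}: desired={desired}, max={max_size}, review scaling limits")
```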


Your IT Team Isn't Ready For Change Management If They Can't Answer These 3 Questions

Testing software before failures occur is key, but failures with this level of real-world impact should never reach end customers. Whether the fault lies with third-party systems or the companies themselves, it is their brand that ends up in tatters because of the end-customer experience. Enter Change Management and, if done right, the prevention of these kinds of enormous IT failures. ... The ever-evolving nature of technology, including cloud scaling, infrastructure as code, and frequent updates such as ‘Patch Tuesday’, means that organisations must constantly adapt to change. However, this constant change introduces challenges such as “drift”, the unplanned deviation from standard configurations or expected states within an IT environment. Think of it like a pesky monkey in the machine. Drift can occur subtly and often goes unnoticed until it causes significant disruptions. It also increases uncertainty and doubt in the organisation, making Change Management and Release Management harder and changes more difficult to plan and execute safely. ... To be effective, Change Management needs to detect and understand drift in the environment to have a full understanding of Current State, Risk Assessment and Expected Outcomes.
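
Drift detection ultimately reduces to comparing the declared (expected) state with the observed state. The sketch below is a minimal, generic Python illustration of that idea: it diffs a desired configuration against what is actually running and reports missing, unexpected, and changed values. Real drift tools work against IaC state files and live APIs, which this deliberately glosses over; the settings shown are hypothetical.

```python
# Minimal configuration-drift check: compare declared settings with observed settings.
from typing import Any, Dict

def detect_drift(declared: Dict[str, Any], observed: Dict[str, Any]) -> Dict[str, Any]:
    drift = {"missing": [], "unexpected": [], "changed": {}}
    for key, want in declared.items():
        if key not in observed:
            drift["missing"].append(key)
        elif observed[key] != want:
            drift["changed"][key] = {"declared": want, "observed": observed[key]}
    for key in observed:
        if key not in declared:
            drift["unexpected"].append(key)
    return drift

# Example: a hypothetical web server's declared vs. actual settings.
declared = {"instance_type": "m5.large", "port": 443, "tls": True, "patch_level": "2024-11"}
observed = {"instance_type": "m5.xlarge", "port": 443, "tls": True, "debug": True}

print(detect_drift(declared, observed))
# {'missing': ['patch_level'], 'unexpected': ['debug'],
#  'changed': {'instance_type': {'declared': 'm5.large', 'observed': 'm5.xlarge'}}}
```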


RIP Open Core — Long Live Open Source

Open-core was originally popular because it allowed companies to build a community around a free product version while charging for a more full, enterprise-grade version. This setup thrived in the 2010s, helping companies like MongoDB and Redis gain traction. But times have changed, and today, instead of enhancing a company’s standing, open-core models often create more problems than they solve. ... While open-core and source-available models had their moment, companies are beginning to realize the importance of true open source values and are finding their way back. This return to open source is a sign of growth, with businesses realigning with the collaborative spirit at the core (wink) of the OSS community. More companies are adopting models that genuinely prioritize community engagement and transparency rather than using them as marketing or growth tactics. ... As the open-core model fades, we’re seeing a more sustainable approach take shape: the Open-Foundation model. This model allows the open-source offering to be the backbone of a commercial offering without compromising the integrity of the OSS project. Rather, it reinforces it as a valuable, standalone product that supports the commercial offering instead of competing against it.


Why SaaS Backup Matters: Protecting Data Beyond Vendor Guarantees

Most IT departments have long recognized the importance of backup and recovery for applications and data that they host themselves. When no one else is managing your workloads and backing them up, having a recovery plan to restore them if necessary is essential for minimizing the risk that a failure could disrupt business operations. But when it comes to SaaS, IT operations teams sometimes think in different terms. That's because SaaS applications are hosted and managed by external vendors, not the IT departments of the businesses that use SaaS apps. In many cases, SaaS vendors provide uptime or availability guarantees. They don't typically offer details about exactly how they back up applications and data or how they'll recover data in the event of a failure, but a backup guarantee is typically implicit in SaaS products. ... Historically, SaaS apps haven't featured prominently, if at all, in backup and recovery strategies. But the growing reliance on SaaS apps — combined with the many risks that can befall SaaS application data even if the SaaS vendor provides its own backup or availability guarantees — makes it critical to integrate SaaS apps into backup and recovery plans. The risks of not backing up SaaS have simply become too great.


Biometrics in the Cyber World

Biometrics strengthens security in several ways, including stronger authentication, greater user convenience, and a reduced risk of identity theft. Because biometric traits are unique to each user, they add a layer of security to authentication. Traditional authentication, such as passwords, often relies on weak combinations that are easily breached; biometrics helps prevent that. People constantly forget their passwords and end up resetting them, but because biometrics is tied directly to one’s identity, a forgotten password is no longer an inconvenience. Hackers routinely try to break into accounts by guessing passwords, and biometrics does not accommodate this: an attacker cannot guess a password when the uniqueness of one’s identity is what is required for authentication. This is one of the advantages of biometric systems and should reduce the risk of identity theft. Challenges for biometrics include privacy concerns, false positives and negatives, and bias. Because it is personal information, biometric data can lead to privacy violations if mishandled. Storage of such sensitive personal data needs to comply with privacy regulations such as GDPR and CCPA.


Data Architectures in the AI Era: Key Strategies and Insights

Results in data architecture initiatives come much faster if you start with the minimum needed for your data storage and build from there. Begin by considering all use cases and identifying the one component that must be developed so a data product can be delivered. Expansion can happen over time with use and feedback, which will actually create a more tailored and desirable product. ... Educate your key personnel on the importance of being able and ready to make the shift from familiar legacy data systems to modern architectures like data lakehouses or hybrid cloud platforms. Migration to a unified, hybrid, or cloud-based data management system may seem challenging initially, but it is essential for enabling comprehensive data lifecycle management and AI-readiness. By investing in continuous education and training, organizations can enhance data literacy, simplify processes, and improve long-term data governance, positioning themselves for scalable and secure analytics practices. ... Being prepared for the typical challenges of AI means problems can be predicted and anticipated, which helps reduce downtime and frustration in the modernization of data architecture.



Quote for the day:

“The final test of a leader is that he leaves behind him in other men the conviction and the will to carry on.” – Walter Lippmann

Daily Tech Digest - November 14, 2024

Where IT Consultancies Expect to Focus in 2025

“Much of what’s driving conversations around AI today is not just the technology itself, but the need for businesses to rethink how they use data to unlock new opportunities,” says Chaplin. “AI is part of this equation, but data remains the foundation that everything else builds upon.” West Monroe also sees a shift toward platform-enabled environments where software, data, and platforms converge. “Rather than creating everything from scratch, companies are focusing on selecting, configuring, and integrating the right platforms to drive value. The key challenge now is helping clients leverage the platforms they already have and making sure they can get the most out of them,” says Chaplin. “As a result, IT teams need to develop cross-functional skills that blend software development, platform integration and data management. This convergence of skills is where we see impact -- helping clients navigate the complexities of platform integration and optimization in a fast-evolving landscape.” ... “This isn’t just about implementing new technologies, it’s about preparing the workforce and the organization to operate in a world where AI plays a significant role. ...”


How Is AI Shaping the Future of the Data Pipeline?

AI’s role in the data pipeline begins with automation, especially in handling and processing raw data – a traditionally labor-intensive task. AI can automate workflows and allow data pipelines to adapt to new data formats with minimal human intervention. With this in mind, Harrisburg University is actively exploring AI-driven tools for data integration that leverage LLMs and machine learning models to enhance and optimize ETL processes, including web scraping, data cleaning, augmentation, code generation, mapping, and error handling. These adaptive pipelines, which automatically adjust to new data structures, allow companies to manage large and evolving datasets without the need for extensive manual coding. ... Beyond immediate operational improvements, AI is shaping the future of scalable and sustainable data pipelines. As industries collect data at an accelerating rate, traditional pipelines often struggle to keep pace. AI’s ability to scale data handling across various formats and volumes makes it ideal for supporting industries with massive data needs, such as retail, logistics, and telecommunications. In logistics, for example, AI-driven pipelines streamline inventory management and optimize route planning based on real-time traffic data. 
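
A small piece of what "adapting to new data structures" means in practice is tolerating columns that appear, disappear, or arrive under new names. The sketch below is a hand-rolled illustration (not Harrisburg University's tooling): it maps incoming records onto a target schema using exact and fuzzy name matching, the kind of step an LLM-assisted pipeline might automate further. The schema and field names are invented for the example.

```python
# Sketch: map incoming records onto a target schema, tolerating renamed or missing fields.
import difflib
from typing import Any, Dict, List, Optional

TARGET_SCHEMA = ["order_id", "customer_email", "amount", "currency"]

def resolve_field(target: str, incoming_keys: List[str]) -> Optional[str]:
    """Return the incoming key that best matches a target field, if any."""
    if target in incoming_keys:
        return target
    close = difflib.get_close_matches(target, incoming_keys, n=1, cutoff=0.6)
    return close[0] if close else None

def adapt_record(record: Dict[str, Any]) -> Dict[str, Any]:
    keys = list(record.keys())
    adapted = {}
    for field in TARGET_SCHEMA:
        source = resolve_field(field, keys)
        adapted[field] = record.get(source) if source else None  # None flags a gap for later handling
    return adapted

# A feed that renamed two columns still loads into the target shape.
print(adapt_record({"orderId": 1017, "customer_e_mail": "a@example.com",
                    "amount": 42.5, "currency": "USD"}))
```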


Innovating with Data Mesh and Data Governance

Companies choose a data mesh to overcome the limitations of “centralized and monolithic” data platforms, as noted by Zhamak Dehghani, the director of emerging technologies at Thoughtworks. Technologies like data lakes and warehouses try to consolidate all data in one place, but enterprises can find that the data gets stuck there. A company might have only one centralized data repository – typically a team such as IT – that serves the data up to everyone else in the company. This slows down data access because of bottlenecks. For example, having already taken days to get HR privacy approval, the finance department’s data access requests might then sit in the inbox of one or two people in IT for additional days. Instead, a data mesh puts data control in the hands of each domain that serves that data. Subject matter experts (SMEs) in the domain control how this data is organized, managed, and delivered. ... Data mesh with federated Data Governance balances expertise, flexibility, and speed with data product interoperability among different domains. With a data mesh, the people with the most knowledge about their subject matter take charge of their data. In the future, organizations will continue to face challenges in providing good, federated Data Governance to access data through a data mesh.


The Agile Manifesto was ahead of its time

A fundamental idea of the agile methodology is to alleviate this and allow for flexibility and changing requirements. The software development process should ebb and flow as features are developed and requirements change. The software should adapt quickly to these changes. That is the heart and soul of the whole Agile Manifesto. However, when the Agile Manifesto was conceived, the state of software development and software delivery technology was not flexible enough to fulfill what the manifesto was espousing. But this has changed with the advent of the SaaS (software as a service) model. It’s all well and good to want to maximize flexibility, but for many years, software had to be delivered all at once. Multiple features had to be coordinated to be ready for a single release date. Time had to be allocated for bug fixing. The limits of the technology forced software development teams to be disciplined, rigid, and inflexible. Delivery dates had to be met, after all. And once the software was delivered, changing it meant delivering all over again. Updates were often a cumbersome and arduous process. A Windows program of any complexity could be difficult to install and configure. Delivering or upgrading software at a site with 200 computers running Windows could be a major challenge.


Improving the Developer Experience by Deploying CI/CD in Databases

Characteristically less mature than CI/CD for application code, CI/CD for databases enables developers to manage schema updates such as changes to table structures and relationships. This management ability means developers can execute software updates to applications quickly and continuously without disrupting database users. It also helps improve quality and governance, creating a pipeline everyone follows. The CI stage typically involves developers working on code simultaneously, helping to fix bugs and address integration issues in the initial testing process. With the help of automation, businesses can move faster, with fewer dependencies and errors and greater accuracy — especially when backed up by automated testing and validation of database changes. Human intervention is not needed, resulting in fewer hours spent on change management. ... Deploying CI/CD for databases empowers developers to focus on what they do best: Building better applications. Businesses today should decide when, not if, they plan to implement these practices. For development leaders looking to start deploying CI/CD in databases, standardization — such as how certain things are named and organized — is a solid first step and can set the stage for automation in the future. 
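
A common building block for database CI/CD is a versioned-migration runner: each schema change lives in a numbered SQL file, applied once, in order, with the applied versions recorded in the database itself. The sketch below shows that idea against SQLite using only the standard library; the file layout and names are assumptions, and production teams typically reach for dedicated tools such as Flyway or Liquibase instead.

```python
# Minimal versioned-migration runner (illustrative; uses SQLite from the standard library).
import sqlite3
from pathlib import Path

MIGRATIONS_DIR = Path("migrations")  # e.g. 001_create_users.sql, 002_add_index.sql

def apply_migrations(db_path: str = "app.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}

    for sql_file in sorted(MIGRATIONS_DIR.glob("*.sql")):
        version = sql_file.stem.split("_")[0]       # "001" from "001_create_users.sql"
        if version in applied:
            continue                                # already run; keeps the pipeline idempotent
        conn.executescript(sql_file.read_text())    # apply the change set
        conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))
        conn.commit()
        print(f"applied {sql_file.name}")

    conn.close()

if __name__ == "__main__":
    apply_migrations()
```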


To Dare or not to Dare: the MVA Dilemma

Business stakeholders must understand the benefits of technology experiments in terms they are familiar with, regarding how the technology will better satisfy customer needs. Operations stakeholders need to be satisfied that the technology is stable and supportable, or at least that stability and supportability are part of the criteria that will be used to evaluate the technology. Wholly avoiding technology experiments is usually a bad thing because it may miss opportunities to solve business problems in a better way, which can lead to solutions that are less effective than they would be otherwise. Over time, this can increase technical debt. ... These trade-offs are constrained by two simple truths: the development team doesn’t have much time to acquire and master new technologies, and they cannot put the business goals of the release at risk by adopting unproven or unsustainable technology. This often leads the team to stick with tried-and-true technologies, but this strategy also has risks, most notably those of the hammer-nail kind in which old technologies that are unsuited to novel problems are used anyway, as in the case where relational databases are used to store graph-like data structures.


2025 API Trend Reports: Avoid the Antipattern

Modern APIs aren’t all durable, full-featured products, and don’t need to be. If you’re taking multiple cross-functional agile sprints to design an API you’ll use for less than a year, you’re wasting resources building a system that will probably be overspecified and bloated. The alternative is to use tools and processes centered around an API developer’s unit of work, which is a single endpoint. No matter the scope or lifespan of an API, it will consist of endpoints, and each of those has to be written by a developer, one at a time. It’s another way that turning back to the fundamentals can help you adapt to new trends. ... Technology will keep evolving, and the way we employ AI might look quite different in a few years. Serverless architecture is the hot trend now, but something else will eventually overtake it. No doubt, cybercriminals will keep surprising us with new attacks. Trends evolve, but underlying fundamentals — like efficiency, the need for collaboration, the value of consistency and the need to adapt — will always be what drives business decisions. For the API industry, the key to keeping up with trends without sacrificing fundamentals is to take a developer-centric approach. Developers will always create the core value of your APIs. 


The targeted approach to cloud & data - CIOs' need for ROI gains

AI and DaaS are part of the pool of technologies that Pacetti also draws on, and the company also uses AI provided by Microsoft, both with ChatGPT and Copilot. Plus, AI has been integrated into the e-commerce site to support product research and recommendations. But there’s an even more essential area for Pacetti. “With the end of third-party cookies, AI is now essential to exploit the little data we can capture from internet users who accept tracking while browsing,” he says. “We use Google’s GA4 to compensate for missing analytics data, for example, by exploiting data from technical cookies.” ... CIOs discuss sales targets with CEOs and the board, cementing the IT and business bond. But another even more innovative aspect is to not only make IT a driver of revenues, but also to measure IT with business indicators. This is a form of advanced convergence achieved by following specific methodologies. Sondrio People’s Bank (BPS), for example, adopted business relationship management, which deals with translating requests from operational functions to IT and, vice versa, bringing IT into operational functions. BPS also adopts proactive thinking, a risk-based framework for strategic alignment and compliance with business objectives.


Hidden Threats Lurk in Outdated Java

How important are security updates? After all, Java is now nearly 30 years old; haven’t we eliminated all the vulnerabilities by now? Sadly not, and realistically, that will never happen. OpenJDK contains 7.5 million lines of code and relies on many external libraries, all of which can be subject to undiscovered vulnerabilities. ... Since Oracle changed its distributions and licensing, there have been 22 updates. Of these, six PSUs required a modification and new release to address a regression that had been introduced. The time to create the new update has varied from just under two weeks to over five weeks. At no time have any of the CPUs been affected like this. Access to a CPU is essential to maintain the maximum level of security for your applications. Since all free binary distributions of OpenJDK only provide the PSU version, some users may consider a couple of weeks before being able to deploy as an acceptable risk. ... When an update to the JDK is released, all vulnerabilities addressed are disclosed in the release notes. Bad actors now have information enabling them to try and find ways to exploit unpatched applications.


How to defend Microsoft networks from adversary-in-the-middle attacks

Depending on the impact of the attack, start the cleanup process. Begin by forcing a password change on the user account, ensuring that you have revoked all tokens to block the attacker’s fake credentials. If the consequences of the attack were severe, consider disabling the user’s primary account and setting up a new temporary account as you investigate the extent of the intrusion. You may even consider quarantining the user’s devices and potentially taking forensic-level backups of workstations if you are unsure of the original source of the intrusion so you can best investigate. Next, review all app registrations, changes to service principals, enterprise apps, and anything else the user may have changed or impacted since the time the intrusion was noted. You’ll want to do a deep investigation into the mailbox’s access and permissions. Mandiant has a PowerShell-based script that can assist you in investigating the impact of the intrusion. “This repository contains a PowerShell module for detecting artifacts that may be indicators of UNC2452 and other threat actor activity,” Mandiant notes. “Some indicators are ‘high-fidelity’ indicators of compromise, while other artifacts are so-called ‘dual-use’ artifacts.”
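
For the token-revocation step specifically, Microsoft Graph exposes a revokeSignInSessions action that invalidates a user's refresh tokens. The sketch below is an assumption-heavy illustration (not Mandiant's tooling) that calls it with the requests library; acquiring GRAPH_TOKEN with an appropriate permission (such as User.RevokeSessions.All) via MSAL, and the rest of the incident-response workflow, are left out.

```python
# Sketch: revoke a compromised user's refresh tokens via Microsoft Graph.
# Assumes GRAPH_TOKEN is an access token already obtained (e.g., with MSAL) that is
# permitted to call users/{id}/revokeSignInSessions; error handling is minimal.
import os
import requests

GRAPH_TOKEN = os.environ["GRAPH_TOKEN"]

def revoke_sessions(user_principal_name: str) -> None:
    url = f"https://graph.microsoft.com/v1.0/users/{user_principal_name}/revokeSignInSessions"
    resp = requests.post(url, headers={"Authorization": f"Bearer {GRAPH_TOKEN}"}, timeout=30)
    resp.raise_for_status()
    print(f"Revoked refresh tokens for {user_principal_name}; "
          "existing access tokens will still expire at their normal lifetime.")

if __name__ == "__main__":
    revoke_sessions("compromised.user@example.com")
```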



Quote for the day:

"To think creatively, we must be able to look afresh to at what we normally take for granted." -- George Kneller

Daily Tech Digest - November 13, 2024

In response to current digital transformation demands, organizations are integrating emerging technologies at an unprecedented rate. Despite their numerous benefits, securing these technologies is challenging for technology leaders. The white paper identified more than 200 critical and emerging technologies reshaping the digital ecosystem. Beyond AI and IoT, technologies such as blockchain, biotechnology and quantum computing are rising on the hype cycle, introducing new cybersecurity risks. ... Quantum computing, while promising breakthrough computational power, presents grave cybersecurity risks. It threatens to break current encryption standards, and quantum computers can potentially decrypt data collected now for future access. "The threat of quantum computing underscores the need for quantum-resistant cryptographic solutions to secure our digital future," the white paper stated. ... The cybersecurity industry faces a critical shortage of skilled professionals capable of managing emerging technology security. Cybersecurity Ventures projected a shortfall of 3.5 million cybersecurity professionals by 2025. Gartner predicted this skills gap would cause more than 50% of significant incidents by 2025. 


Do You Need a Solution or Enterprise Architect?

A Solution Architect is more like a surgeon who operates on someone to fix a problem, and the patient returns to normal life in a short time. An Enterprise Architect is more like an internal medicine specialist who treats a patient with a chronic illness over a number of years to improve the person’s quality of life. ... Architects are most successful when they help projects to succeed. Commonality of process and technology can be beneficial for an organization. But once architects are merely policing projects and rejecting aspects based on strict criteria, they lose the ability to positively influence the initiatives. Solution alignment is best achieved through working collaboratively with projects early to convince them of the advantages of various design choices. The first deliverable many architecture teams produce is what I call the “red/yellow/green list”. You’ve all seen these. Each technology classification is listed down the page – for example: server type, operating system, network software, database technology, and programming language. Three “colour” columns follow across the page. “Red” items are forbidden to be used by new projects. Although some legacy applications may still use them, they need to be phased out. “Yellow” items can be used under certain circumstances, but must be pre-approved by some kind of review committee. 


DataRobot launches Enterprise AI Suite to bridge gap between AI development and business value

The agentic AI approach is designed to help organizations handle complex business queries and workflows. The system employs specialist agents that work together to solve multi-faceted business problems. This approach is particularly valuable for organizations dealing with complex data environments and multiple business systems. “You ask a question to your agentic workflow, it breaks up the questions into a set of more specific questions, and then it routes them to agents which are specialists in various different areas,” Saha explained. For instance, a business analyst’s question about revenue might be routed to multiple specialized agents – one handling SQL queries, another using Python – before combining results into a comprehensive response. ... “We have put together a lot of instrumentation which lets people visually understand, for example, if you have a lot of clustering of data in the vector database, you can get a spurious answer,” Saha said. “You would be able to see that, if you see your questions are landing in areas where you don’t have enough information.” This observability extends to the platform’s governance capabilities, with real-time monitoring and intervention features. 
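
Stripped to its core, the routing pattern Saha describes is a dispatcher that splits a request into sub-questions and hands each to the agent registered for that kind of work. The toy sketch below shows only that skeleton, with naive keyword routing standing in for the LLM-based decomposition and for the SQL and Python specialists DataRobot actually uses; every name here is made up.

```python
# Toy agent-routing skeleton: decompose a request and dispatch parts to specialist handlers.
from typing import Callable, Dict, List, Tuple

def sql_agent(question: str) -> str:
    return f"[sql-agent] would query the warehouse for: {question!r}"

def python_agent(question: str) -> str:
    return f"[python-agent] would compute/analyze: {question!r}"

AGENTS: Dict[str, Callable[[str], str]] = {"sql": sql_agent, "python": python_agent}

def decompose(request: str) -> List[Tuple[str, str]]:
    """Stand-in for LLM decomposition: naive keyword routing of sub-questions."""
    subtasks = []
    for part in request.split(" and "):
        agent = "sql" if any(w in part.lower() for w in ("revenue", "orders", "by region")) else "python"
        subtasks.append((agent, part.strip()))
    return subtasks

def answer(request: str) -> str:
    results = [AGENTS[agent](sub) for agent, sub in decompose(request)]
    return "\n".join(results)  # a real system would synthesize these into one response

print(answer("show revenue by region for Q3 and forecast next quarter's growth"))
```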


Using AI for DevOps: What Developers and Ops Need To Know

“AI can be incredibly powerful in DevOps when it’s implemented with a clear framework that makes it easy for developers to do the right thing and hard for them to do the wrong thing,” says Durkin. “Making it easy to do the right thing starts with standardizing templates and policies to streamline workflows. Create templates and enforce policies that support easy, repeatable integration of AI tools. By establishing policies that automate security and compliance checks, AI tools can operate within these boundaries, providing valuable support without compromising standards. This approach simplifies adoption and makes it harder to skip essential steps, reinforcing best practices across teams.” ... While having a well-considered strategy in place before embracing AI and DevOps is a must, Durkin and Govrin both offered up some additional tips and advice for getting AI tools and technologies to integrate with DevOps ambitions more easily. “In enterprise environments, deploying AI applications locally can significantly improve adoption and integration,” said Govrin. “Unlike consumer apps, enterprise AI benefits greatly from self-hosted setups, where solutions like local inference, support for self-hosted models and edge inferencing play a key role. These methods keep data secure and mitigate risks associated with data transfer across public clouds.”


The CISO paradox: With great responsibility comes little or no power

The absence of command makes cybersecurity decision-making a tedious and often frustrating process for CISOs. They are expected to move fast, to anticipate and address security issues before they become realized. But without command, they’re stuck in a cycle of “selling” the importance of security investments, waiting for approvals, and relying on others to prioritize those investments. This constant need for buy-in slows down response times and creates opportunities for something bad to happen. In cybersecurity, where timing is everything, these delays can be costly. Beyond timing, the concept of command is critical for strategic alignment and empowerment. In organizations where the CISO lacks true command, they’re forced to operate reactively rather than proactively. ... If organizations want to truly protect themselves, they need to recognize that CISOs require true command. The most effective CISOs are those who can operate with full authority over their domain, free from constant internal roadblocks. As companies consider how best to secure their data, they should ask themselves whether they are genuinely setting their CISOs up for success. Are they empowering them with the resources, authority, and autonomy to act? Or are they merely assigning a high-stakes responsibility without the power to fulfill it?


Harnessing SaaS to elevate your digital transformation journey

While SaaS provides the infrastructure, AI is the catalyst that powers digital transformation at scale. Companies are increasingly adopting AI-driven SaaS platforms to streamline workflows, automate tasks, and make data-driven decisions. In the B2B SaaS sector, this combination is revolutionising how businesses operate, helping them personalize customer interactions, predict outcomes, and optimize operations. ... In manufacturing, AI optimizes supply chain management, reducing waste and increasing productivity. In the finance sector, AI-driven SaaS automates risk assessment, improving decision-making and reducing operational costs. The benefits of adopting AI and SaaS are clear: enhanced customer experience, streamlined operations, and the ability to innovate faster than ever before. Companies that fail to integrate these technologies risk falling behind as competitors capitalize on these advancements to deliver superior products and services. As businesses continue to adopt SaaS and AI-driven solutions, the future of digital transformation looks promising. Companies are no longer just thinking about automating processes or improving efficiency; they are investing in technologies that will help them shape the future of their industries.


Tackling ransomware without banning ransom payments

Despite these somewhat muddied waters, the correct response to ransomware attacks is clear: paying demands should almost always be a last resort. The only exception should be where there is a risk to life. Paying because it’s easy, costs less and causes less disruption to the business is not a good enough reason to pay, regardless of whether it’s the business handing over the cash or an insurer. However, while a step in the right direction, totally banning ransom payments addresses only one form of attack and feels a bit like a ‘whack-a-mole’ strategy. It may ease the rise in attacks for a short while, but attackers will inevitably switch tactics, to compromising business email perhaps, or something we’ve not even heard of yet. So, what else can be done to slow the rise in ransomware attacks? Well, we can consider a few options, such as closing vulnerability trading brokers and regulating cryptocurrency transactions. To pick on the latter as an example, most cybercrime monetizes through cryptocurrency, so rather than simply banning payments, it could be a better option to regulate the crypto industry and flow of money. Alongside this kind of regulatory change, governments could also consider moving the decision of whether to pay or not to an independent body.


CISOs in 2025: Balancing security, compliance, and accountability

The scope of the CISO role has expanded significantly over the past 10-15 years, and has moved from mainly technical oversight to strategic leadership, risk management, and regulatory compliance. The constant pressure to prevent breaches and manage incidents can lead to high stress and burnout, making the role less appealing. This also means that modern CISOs must possess a blend of technical expertise, strategic thinking, and strong interpersonal skills. The requirement for such a diverse skill set can limit the pool of qualified candidates, as not all cybersecurity professionals have the necessary combination of skills. ... CISOs will need to be able to effectively communicate complex cybersecurity issues to non-technical board members and executives. This involves translating technical jargon into business language, and clearly articulating the impact of cybersecurity risks on the organization’s overall business strategy. And as cybersecurity becomes integral to business strategy, CISOs must be able to think beyond immediate threats, and focus on long-term strategic planning. This includes understanding how cybersecurity initiatives align with business goals and contribute to competitive advantage.


Emergence of Preemptive Cyber Defense: The Key to Defusing Sophisticated Attacks

The frequency of attacks is only part of the problem. Perhaps the biggest concern is the sophistication of incidents. Right now, cybercriminals are using everything from AI and machine learning to polymorphic malware coupled with sophisticated psychological tactics that play off of breaking world events and geopolitical tension. ... The clear limitations of these reactive systems have many businesses looking to shift away from the “one-size-fits-all” approach to more dynamic options. ... With redundancy, security, and resiliency in mind, many companies are following the lead of government agencies and diversifying their cybersecurity investments across multiple providers. This includes the option of a preemptive cyber defense solution, which, rather than relying on a single offering, blends in three — a triad that addresses the complexities of modern cybersecurity challenges. ... The preemptive cyber defense triad offers businesses the ultimate protection—a security ecosystem where the attack surface is constantly changing (AMTD), the security controls are always optimized (ASCA), and the overall threat exposure is continuously managed and minimized (CTEM).


Insurance Firm Introduces Liability Coverage for CISOs

“CISOs are the front line of defense against cyber threats, yet their role may leave them exposed to personal liabilities – particularly in light of the Securities and Exchange Commission’s (SEC) new cyber disclosure rules,” Nick Economidis, senior vice president of eRisk at Crum and Forster, said in a statement. “Our CISO Professional Liability Insurance is designed to bridge that gap, providing an essential safety net by offering CISOs the protection they need to perform their jobs with confidence.” ... The new insurance program by the Morristown, New Jersey-based insurance firm comes in the wake of charges against software maker SolarWinds and its CISO, Tim Brown, being dismissed by a federal court judge. The charges were made in connection with the massive software supply chain attack in 2020 by a threat group supported by Russia’s foreign intelligence services. ... “As personal liability risks for CISOs continue to evolve, the availability and scope of D&O insurance will remain a critical factor in recruiting and retaining top cybersecurity talent,” Fehling wrote. “Companies that offer robust insurance protection may gain a competitive advantage in the tight market for skilled security leaders.”



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - November 12, 2024

Researchers Focus In On ‘Lightcone Bound’ To Develop An Efficiency Benchmark For Quantum Computers

The researchers formulated this bound by first reinterpreting the quantum circuit mapping challenge through quantum information theory. They focused on the SWAP “uncomplexity,” the lowest number of SWAP operations needed, which they determined using graph theory and information geometry. By representing qubit interactions as density matrices, they applied concepts from network science to simplify circuit interactions. To establish the bound, in an interesting twist, the team employed a Penrose diagram — a tool from theoretical physics typically used to depict spacetime geometries — to visualize the paths required for minimal SWAP-gate application. They then compared their model against a brute-force method and IBM’s Qiskit compiler, with consistent results affirming that their bound offers a practical minimum SWAP requirement for near-term quantum circuits. The researchers acknowledge the lightcone model has some limitations that could be the focus of future work. For example, it assumes ideal conditions, such as a noiseless processor and indefinite parallelization, conditions not yet achievable with current quantum technology. The model also does not account for single-qubit gate interactions, focusing only on two-qubit operations, which limits its direct applicability for certain quantum circuits.
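
The paper's lightcone bound itself is not reproduced here, but the routing problem it addresses is easy to picture: on a device coupling graph, bringing two logical qubits next to each other needs roughly (shortest-path length minus one) SWAPs. The sketch below computes that naive per-gate estimate with networkx on a small line-shaped device, under the simplifying assumption that the placement resets before each gate; it is a didactic illustration of SWAP counting on a coupling graph, not the authors' bound.

```python
# Naive per-gate SWAP estimate on a device coupling graph (illustrative; uses networkx).
# Bringing two qubits adjacent requires at least (shortest-path length - 1) SWAPs for that gate alone.
import networkx as nx

# A 5-qubit device with linear connectivity: 0-1-2-3-4
coupling = nx.path_graph(5)

# Logical two-qubit gates we want to execute, given a fixed 1:1 initial placement.
gates = [(0, 1), (0, 4), (2, 4)]

def naive_swap_estimate(graph: nx.Graph, gate_list) -> int:
    total = 0
    for a, b in gate_list:
        distance = nx.shortest_path_length(graph, a, b)
        total += max(distance - 1, 0)   # adjacent qubits need no SWAPs
    return total

print("naive SWAP estimate:", naive_swap_estimate(coupling, gates))  # 0 + 3 + 1 = 4
```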


Evaluating your organization’s application risk management journey

One way CISOs can articulate application risk in financial terms is by linking security improvement efforts to measurable outcomes, like cost savings and reduced risk exposure. This means quantifying the potential financial fallout from security incidents and showing how preventative measures mitigate these costs. CISOs need to equip their teams with tools that will help them protect their business in the short and long term. A study we commissioned with Forrester found that putting application security measures in place could save the average organization millions in avoided breach costs. ... To keep application risk management a dynamic, continuous process, CISOs integrate security into every stage of software development. Instead of relying on periodic assessments, organisations should implement real-time risk analysis, continuous monitoring, and feedback mechanisms to enable teams to address vulnerabilities promptly as they arise, rather than waiting for scheduled evaluations. Incorporating automation can also play a key role in streamlining this process, enabling quicker remediation of identified risks. Building on this, creating a security-first mindset across the organisation – through training and clear communication – ensures risk management adapts to new threats, supporting both innovation and compliance.


How a Second Trump Presidency Could Shape the Data Center Industry

“We anticipate that the incoming administration will have a keen focus on AI and our nation’s ability to be the global leader in the space,” Andy Cvengros, managing director, co-lead of US data center markets for JLL, told Data Center Knowledge. He said to do that, the industry will need to solve the transmission delivery crisis and continue to increase generation capacity rapidly. This may include reactivating decommissioned coal and nuclear power plants, as well as commissioning more of them. “We also anticipate that state and federal governments will become much more active in enabling the utilities to proactively expand substations, procure long lead items and support key submarket expansion through planned developments,” Cvengros said. ... Despite the federal government’s likely hands-off approach, Harvey said he believes large corporations might support consistent, global standards – especially since European regulations are far stricter. “US companies would prefer a unified regulatory framework to avoid navigating a complex patchwork of rules across different regions,” he said. Still, Europe’s stronger regulatory stance on renewable power might lead some companies to prioritize US-based expansions, where subsidies and fewer regulations make operations more economically feasible.


Data Breaches are a Dime a Dozen: It’s Time for a New Cybersecurity Paradigm

The modern-day ‘stack’ includes many disparate technology layers—from physical and virtual servers to containers, Kubernetes clusters, DevOps dashboards, IoT, mobile platforms, cloud provider accounts, and, more recently, large language models for GenAI. This has created the perfect storm for threat actors, who are targeting the access and identity silos that significantly broaden the attack surface. The sheer volume of weekly breaches reported in the press underscores the importance of protecting the whole stack with Zero Trust principles. Too often, we see bad actors exploiting some long-lived, stale privilege that allows them to persist on a network and pivot to the part of a company’s infrastructure that houses the most sensitive data. ... Zero Trust access for modern infrastructure benefits from being coupled with a unified access mechanism that acts as a front-end to all the disparate infrastructure access protocols – a single control point for authentication and authorization. This provides visibility, auditing, enforcement of policies, and compliance with regulations, all in one place. These solutions already exist on the market, deployed by security-minded organizations. However, adoption is still in early days. 


AI’s math problem: FrontierMath benchmark shows how far technology still has to go

Mathematics, especially at the research level, is a unique domain for testing AI. Unlike natural language or image recognition, math requires precise, logical thinking, often over many steps. Each step in a proof or solution builds on the one before it, meaning that a single error can render the entire solution incorrect. “Mathematics offers a uniquely suitable sandbox for evaluating complex reasoning,” Epoch AI posted on X.com. “It requires creativity and extended chains of precise logic—often involving intricate proofs—that must be meticulously planned and executed, yet allows for objective verification of results.” This makes math an ideal testbed for AI’s reasoning capabilities. It’s not enough for the system to generate an answer—it has to understand the structure of the problem and navigate through multiple layers of logic to arrive at the correct solution. And unlike other domains, where evaluation can be subjective or noisy, math provides a clean, verifiable standard: either the problem is solved or it isn’t. But even with access to tools like Python, which allows AI models to write and run code to test hypotheses and verify intermediate results, the top models are still falling short.
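
The "write and run code to verify intermediate results" loop the benchmark allows is easy to picture: a model can check a conjectured closed form both symbolically and on small cases before committing to it. Below is a tiny example of that style of check using sympy; it has nothing to do with the FrontierMath problems themselves, which are far harder.

```python
# Tiny example of machine-checking a conjectured identity before trusting it (uses sympy).
import sympy as sp

n, k = sp.symbols("n k", positive=True, integer=True)

# Conjecture: sum_{k=1..n} k^3 == (n(n+1)/2)^2
lhs = sp.summation(k**3, (k, 1, n))
rhs = (n * (n + 1) / 2) ** 2

print(sp.simplify(lhs - rhs) == 0)          # symbolic check: True
print(all(lhs.subs(n, v) == rhs.subs(n, v)  # spot-check small cases numerically
          for v in range(1, 20)))
```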


Can Wasm replace containers?

One area where Wasm shines is edge computing. Here, Wasm’s lightweight, sandboxed nature makes it especially intriguing. “We need software isolation on the edge, but containers consume too many resources,” says Michael J. Yuan, founder of Second State and the Cloud Native Computing Foundation’s WasmEdge project. “Wasm can be used to isolate and manage software where containers are ‘too heavy.’” Whereas containers take up megabytes or gigabytes, Wasm modules take mere kilobytes or megabytes. Compared to containers, a .wasm file is smaller and agnostic to the runtime, notes Bailey Hayes, CTO of Cosmonic. “Wasm’s portability allows workloads to run across heterogeneous environments, such as cloud, edge, or even resource-constrained devices.” ... Wasm has a clear role in performance-critical workloads, including serverless functions and certain AI applications. “There are definitive applications where Wasm will be the first choice or be chosen over containers,” says Luke Wagner, distinguished engineer at Fastly, who notes that Wasm brings cost-savings and cold-start improvements to serverless-style workloads. “Wasm will be attractive for enterprises that don’t want to be locked into the current set of proprietary serverless offerings.”


Authentication Actions Boost Security and Customer Experience

Authentication actions can be used as effective tools for addressing the complex access scenarios organizations must manage and secure. They can be added to workflows to implement convenience and security measures after users have successfully proven their identity during the login process. ... When using authentication actions, first take some time to fully map out the customer journey you want to achieve, and most importantly, all of the possible variations of this journey. Think of your authentication requirements as a flowchart that you control. Start by mapping out your requirements for different users and how you want them to sign up and authenticate. Understand the trade-off between security and user experience. Consider using actions to enable a frictionless initial login with a simple authentication method. You can use step-up authentication as a technique that increases the level of assurance when the user needs to perform higher-privilege operations. You can also use actions to implement dynamic behavior per user. For instance, you can use an action that captures an identifier like an email to identify the user. Then you can use another action to look up the user’s preferred authentication method or methods to give each user a personalized experience.
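
The step-up idea reduces to a policy check at the point of action: compare the assurance level of the current session with the level the operation requires, and trigger re-authentication when it falls short. The sketch below is a generic, vendor-neutral illustration of that decision logic; real deployments express it in their identity provider's action or policy language, and the levels and operations shown are assumptions.

```python
# Generic step-up authentication decision: does this session satisfy the operation's required assurance?
from dataclasses import dataclass

# Ordered assurance levels; a higher value means stronger proof of identity.
LEVELS = {"password": 1, "otp": 2, "passkey": 3}

REQUIRED_LEVEL = {
    "view_dashboard": 1,
    "change_payment_details": 2,
    "delete_account": 3,
}

@dataclass
class Session:
    user_id: str
    auth_method: str        # how the user last authenticated

def step_up_needed(session: Session, operation: str) -> bool:
    have = LEVELS.get(session.auth_method, 0)
    need = REQUIRED_LEVEL.get(operation, 1)
    return have < need      # True => send the user to a stronger authentication flow

session = Session(user_id="u-42", auth_method="password")
print(step_up_needed(session, "view_dashboard"))          # False: stay frictionless
print(step_up_needed(session, "change_payment_details"))  # True: require OTP or passkey first
```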


How Businesses use Modern Development Platforms to Streamline Automation

APIs are essential for streamlining data flows between different systems. They enable various software applications to communicate with each other, automating data exchange and reducing manual input. For instance, integrating an API between a customer relationship management (CRM) system and an email marketing platform can automatically sync contact information and campaign data. This not only saves time, but also minimizes errors that can occur with manual data entry. ... Workflow automation tools are designed to streamline business processes by automating repetitive steps and ensuring smooth transitions between tasks. These tools help businesses design and manage workflows, automate task assignments, and monitor progress. For example, tools like Asana and Monday.com allow teams to automate task notifications, approvals, and status updates. By automating these processes, businesses can improve collaboration and reduce the risk of missed deadlines or overlooked tasks. Workflow automation tools also provide valuable insights into process performance, enabling companies to identify bottlenecks and optimize their operations. This leads to more efficient workflows and better resource management.
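
The CRM-to-email-platform sync described above is, at its simplest, a scheduled job that reads from one API and writes to another. The sketch below uses the requests library against hypothetical endpoints (crm.example.com and mail.example.com, invented purely for illustration, as are the field names) to show the shape of such a job; real integrations add paging, retries, and deduplication.

```python
# Sketch of a CRM -> email-marketing contact sync. Endpoints and field names are hypothetical.
import os
import requests

CRM_URL = "https://crm.example.com/api/contacts"          # invented for illustration
MAIL_URL = "https://mail.example.com/api/subscribers"     # invented for illustration
HEADERS_CRM = {"Authorization": f"Bearer {os.environ['CRM_TOKEN']}"}
HEADERS_MAIL = {"Authorization": f"Bearer {os.environ['MAIL_TOKEN']}"}

def sync_contacts() -> None:
    contacts = requests.get(CRM_URL, headers=HEADERS_CRM, timeout=30).json()
    for contact in contacts:
        payload = {
            "email": contact["email"],
            "first_name": contact.get("first_name", ""),
            "tags": ["crm-sync"],
        }
        resp = requests.post(MAIL_URL, json=payload, headers=HEADERS_MAIL, timeout=30)
        resp.raise_for_status()
    print(f"synced {len(contacts)} contacts")

if __name__ == "__main__":
    sync_contacts()
```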

“Micromanagement is one of the fastest ways to destroy IT culture,” says Jay Ferro, EVP and chief information, technology, and product officer at Clario. “When CIOs don’t trust their teams to make decisions or constantly hover over every detail, it stifles creativity and innovation. High-performing professionals crave autonomy; if they feel suffocated by micromanagement, they’ll either disengage or leave for an environment where they’re empowered to do their best work.” ... One of the most challenging issues facing transformational CIOs is the overwhelming demand to take on more initiatives, deliver to greater scope, or accept challenging deadlines. Overcommitting to what IT can reasonably accomplish is an issue, but what kills IT culture is when the CIO leaves program leaders defenseless when stakeholders are frustrated or when executive detractors roadblock progress. “It demoralizes IT when there is a lack of direction, no IT strategy, and the CIO says yes to everything the business asks for regardless of whether the IT team has the capacity,” says Martin Davis, managing partner at Dunelm Associates. “But it totally kills IT culture when the CIO doesn’t shield teams from angry or disappointed business senior management and stakeholders.”


Understanding Data Governance Maturity: An In-Depth Exploration

Maturity in data governance is typically assessed through various models that measure different aspects of data management, such as data quality and compliance, and examine processes for managing data’s context (metadata) and its security. Maturity models provide a structured way to evaluate where an organization stands and how it can improve for a given function. ... Many maturity models are complex and may require significant time and resources to implement. Organizations need to ensure they have the capacity to effectively handle the complexity involved in using these models. Additionally, some data governance maturity models do not address the relevant related data management functions, such as metadata management, data quality management, or data security, to a sufficient level of detail for some organizations. ... Implementing changes based on maturity model assessments can face resistance; organizational culture may not accept the views discovered in an assessment. Adopting and sustaining effective change management strategies and choosing a maturity model carefully can help overcome resistance and ensure successful implementation.



Quote for the day:

"Whenever you see a successful person, you only see the public glories, never the private sacrifices to reach them." -- Vaibhav Shah

Daily Tech Digest - November 11, 2024

What if robots learned the same way genAI chatbots do?

To unlock further potential in robotic learning, training objectives beyond supervised learning, such as self-supervised or unsupervised learning, should be investigated. It is important to grow the datasets with diverse, high-quality data. This could include teleoperation data, simulations, human videos, and deployed robot data. Researchers need to learn the optimal blend of data types for higher HPT success rates. Researchers and later industry will need to create standardized virtual testing grounds to facilitate the comparison of different robot models. ... Think of it as giving robots more demanding, more realistic challenges to solve. Scientists are also looking into how the amount of data, the size of the robot’s “brain” (model), and its performance are connected. Understanding this relationship could help us build better robots more efficiently. Another exciting area is teaching robots to understand different types of information. This could include 3D maps of their surroundings, touch sensors, and even data from human actions. By combining all these different inputs, robots could learn to understand their environment more like humans do. All these research ideas aim to create smarter, more versatile robots that can handle a wider range of tasks in the real world. 


Chief AI Officers: Should Every Business Have One?

The CAIO's mandate extends beyond technical oversight. These leaders define their company's AI vision, bring solutions to market and establish ethical governance frameworks, Laqab said. Their systems address data privacy protection and eliminate bias in AI implementations while aligning with organizational objectives. At Ascendion, every data scientist and machine learning engineer develops solutions under stringent guidelines, which Laqab describes as "prioritizing rigorous planning and transparency" - an approach critical in regulated sectors such as healthcare and finance where trustworthy AI proves essential. ... CAIOs work in strategic partnership with CIOs and CTOs to integrate AI capabilities into organizational systems. Laqab underscored the importance of unified planning where AI leaders and technology teams implement initiatives without duplicating or straining resources. This approach builds cross-functional momentum, enabling CAIOs to embed AI in broader processes while maximizing existing IT investments and supporting overall strategy. ... "Just as the CDO emerged to manage data, the CAIO is essential for navigating the complex landscape of AI technologies. 


New research reveals AI adoption on rise, but challenges remain in data governance & ROI realisation

Commenting on the survey, Noshin Kagalwalla, Vice President & Managing Director, SAS India, said: “Indian companies are undoubtedly making progress in AI adoption, but significant work remains. The challenge lies not only in deploying AI but also in a way that it is trustworthy, scalable, and aligned with long-term business objectives. Strategic investments in data governance and AI infrastructure will be crucial to driving sustainable AI performance across industries in India.” “The disparity in target outcomes between AI Leaders and AI Followers demonstrates a lack of clear strategy and roadmap. Where AI Followers are focused on short-term, productivity-based results, AI Leaders have moved beyond these to more complex functional and industry use cases,” said Shukri Dabaghi, Senior Vice President, Asia Pacific and EMEA Emerging at SAS. “As businesses look to capitalise on the transformative potential of AI, it’s important for business leaders to learn from the differences between an AI Leader and an AI Follower. Avoiding a ‘gold rush’ way of thinking ensures long-term transformation is built on trustworthy AI and capabilities in data, processes and skills,” said Mr. Dabaghi.


4 reasons why veterans thrive as cybersecurity professionals

Through their military experience, veterans learn to combat the most sophisticated adversaries in existence and adopt an apex attacker’s perspective. Many of today’s malicious actors are not lone individuals wearing a hoodie and operating from a cybercafe; they’re highly skilled, well-funded nation-state actors or part of a larger cybercrime group that operates like a corporate organization. Dealing with such high-level adversaries requires defenders who are trained specifically to combat their techniques. Many veterans are trained extensively in red team attack simulations, in which they pose as an attacker and attempt to breach an organization’s systems to assess vulnerabilities and boost the organization’s security posture. This training is used to combat nation-state attackers, with military members engaging in monthly or multi-year attack simulations. ... Maintaining security requires a distinct mentality in which your dedication matches that of the threat actor trying to hack into your system. Veterans can become skilled in specialized areas like hunting for advanced adversaries within security systems. They know that adversaries can be relentless in their attempts, and they are equally adept at mounting a persistent defense.


Responsible AI starts with transparency

Today, most foundation AI models have been trained on data scraped from the public internet, so it’s essentially impossible for users to understand the dataset at web scale. Even the model providers themselves aren’t always able to fully understand the composition of their own training data when it’s pulled from so many different sources across the entire internet. Even if they were, they wouldn’t be required to disclose that information to model users. This lack of data transparency is one reason that using publicly available AI models may not be appropriate for enterprises. However, there are ways to work around this. For instance, you can build proxy models, which are simple models used to approximate the results of your more complex AI models. Building a good proxy model requires you to balance the tradeoff between simplicity and accuracy. Nevertheless, even a very simple approximate model can help you understand how each feature of a model impacts its predictions. ... When it comes to building trust, it’s impossible to fully separate your AI models from the humans who use them. Humans naturally want to have some control over the tools they use; if you can’t give employees that sense of control, it’s unlikely they’ll continue to use AI.
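To make the proxy-model technique concrete, here is a minimal sketch, assuming a scikit-learn setup and a synthetic dataset (neither of which comes from the article): a shallow decision tree is trained to mimic a more complex classifier, and its feature importances give a rough view of how each input drives predictions. The tree depth is where the simplicity-versus-accuracy tradeoff shows up.

```python
# Minimal proxy-model sketch: approximate a complex "black box" classifier
# with a simple, interpretable surrogate. Dataset and model choices are
# illustrative assumptions, not a prescribed recipe.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in for the complex production model.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Proxy: a shallow tree trained on the complex model's own predictions.
# max_depth controls the simplicity/fidelity tradeoff described above.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, complex_model.predict(X))

fidelity = accuracy_score(complex_model.predict(X), proxy.predict(X))
print(f"Proxy fidelity to the complex model: {fidelity:.1%}")

# A rough, proxy-level view of how each feature influences predictions.
for i, importance in enumerate(proxy.feature_importances_):
    print(f"feature_{i}: {importance:.3f}")
```

A high fidelity score means the simple tree is a reasonable stand-in for explanation purposes; a low score is a signal to increase the proxy's capacity before trusting its feature attributions.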


Combating Cybercrime: What to Expect From Trump Presidency?

Trump is no stranger to combating cybercrime. His first administration updated the National Cyber Strategy for the first time in 15 years. "The administration will push to ensure that our federal departments and agencies have the necessary legal authorities and resources to combat transnational cybercriminal activity, including identifying and dismantling botnets, dark markets and other infrastructure used to enable cybercrime," it said. Especially where nation-state attacks are concerned, defending forward - disrupting malicious cyber activity at its source - has been U.S. military doctrine since 2018. But experts also see blemishes on Trump's cyber track record, including his axing the top cybersecurity coordinator role in the White House, weakening cyber diplomacy - a core strategy for tackling cybercrime safe havens - and firing the head of the Cybersecurity and Infrastructure Security Agency, which helps improve domestic resilience. Whether the U.S. continues its strategy of naming and shaming cybercriminals it can't reach, often in Russia, is unclear. Ian Thornton-Trump, a veteran CISO who formerly served with the Military Intelligence Branch of the Canadian Forces, predicts the administration could redirect resources to focus more on China and deemphasize the naming, shaming and disruption of Russian criminals' operations.


Transforming Enterprise Networks With AIOps: A New Era of Intelligent Connectivity

One of the primary benefits of AIOps is its ability to enhance intelligent network management. In today's complex network environments, optimizing performance and ensuring seamless connectivity in a continuously changing fabric is critical. AIOps provides insights across the various IT domains (e.g., application, security, infrastructure, etc.) that help networking professionals identify areas for improvement, automate routine tasks, and maintain optimal network performance. By leveraging AI-driven analytics, organizations can ensure that their networks are always running smoothly, reducing downtime and improving overall efficiency. ... AIOps is paving the way for the creation of autonomous networks that didn't materialize during the era of software-defined networking or intent-based networking. These initiatives promised self-managing, self-healing networks that could adapt to changing conditions and demands with minimal human intervention, but a crucial element was missing: AIOps. AIOps highlights areas where automation can be implemented, allowing networks to respond dynamically to issues and changes in the environment.
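As a toy illustration of the kind of AI-driven analytics involved, the sketch below flags network latency samples that deviate sharply from a rolling baseline so they can be routed to automated triage; real AIOps platforms work across far richer telemetry, and the metric name, window size, and threshold here are assumptions.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=20, z_threshold=3.0):
    """Yield (index, value, z_score) for samples far outside the rolling baseline."""
    history = deque(maxlen=window)
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0:
                z = (value - mu) / sigma
                if abs(z) > z_threshold:
                    yield i, value, z
        history.append(value)

# Example: steady ~12 ms latency with one obvious spike.
latency_ms = [12, 13, 11, 12, 14, 13, 12, 11, 13, 12] * 3 + [95, 12, 13, 11]
for idx, val, z in detect_anomalies(latency_ms):
    print(f"sample {idx}: {val} ms (z={z:+.1f}) flagged for automated triage")
```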


CIOs to spend ambitiously on AI in 2025 — and beyond

The big investments in generative AI may eventually rival traditional cloud investments, but that does not mean top cloud providers - all of which are also top AI platform providers - will suffer. Amazon Web Services, Microsoft Azure, and Google Cloud Platform are enabling the massive amount of gen AI experimentation and planned deployment of AI next year, IDC points out. ... “In the near term, most enterprises are focusing on automation and productivity use cases that can be implemented without fundamentally changing business processes,” McCarthy adds. “However, the higher value use cases involve new business models, which require widespread organizational change.” Stephen Crowley, senior advisor for S&L Ventures and former CIO of global technology solutions at Covetrus, still sees that future as a little way off. “Building the foundation is different from moving to production with AI apps. I think that will take longer,” he says. ... The risks of accidentally exposing sensitive corporate data or designing gen AI models that stray from their intended missions are also top of mind for Dairyland Power’s Melby, who is working with a Microsoft partner to deploy Copilot and Azure OpenAI capabilities to employees in a secure manner.


Building a workplace culture that supports mental well-being: A guide

Balancing productivity and employee well-being can be challenging but achievable. Creating a supportive work environment makes employees more likely to be engaged and productive. This approach not only benefits employees but also drives better business outcomes. ... To relate this model to mental health in the workplace, we can treat the inner foundational level as the basic needs and rights that must be met for employees, such as fair wages, job security, and a healthy work-life balance ... The inner circle, or foundational level, covers the basic needs and rights of employees: fair wages, access to mental health resources, supportive management, and a healthy work-life balance, along with avoiding overwork and reducing workplace stressors. The middle circle depicts evolved workplaces. A mental-health-mature workplace invests in engaged employees, leaders who walk the talk, policies that help employees thrive, and the psychological safety, infrastructure, sensitization and practice needed to support employee well-being and growth. The outer circle depicts a sustainable, mental-health-positive culture that balances these elements and focuses on achieving higher levels of maturity in culture and processes.


7 reasons security breach sources remain unknown

As attacks become more sophisticated, it can become more difficult to unpack the cause of problems, says Raj Samani, SVP and chief scientist at security firm Rapid7. “We must acknowledge that many threat groups take measures to obfuscate their tracks, invariably making any investigation more challenging,” he says. “However, this is often only part of the reason why identifying the source of the breach is so difficult.” Samani adds: “Whilst technologies will aid the investigation, the time spent retroactively reviewing such incidents often competes with the urgency of the next issue, or indeed, the demand to get the environment operational again.” Many breaches are detected long after they occur, and delays make it harder to identify root causes. Here, time is on the side of an attacker, with computer forensic capabilities fading over time as data is amended, overwritten, and deleted. “Hackers are always finding new ways to blend into regular network traffic, so even the best detection systems can end up playing a never-ending game of ‘whack-a-mole’ with threats,” says Peter Wood, CTO at Spectrum Search. “And while the systems might flag something suspicious, figuring out exactly where it started is another story altogether.”



Quote for the day:

“Creativity is thinking up new things. Innovation is doing new things.” -- Theodore Levitt