Daily Tech Digest - December 27, 2024

Software-Defined Vehicles: Onward and Upward

"SDV is about building efficient methodologies to develop, test and deploy software in a scalable way," he said. AWS, through initiatives such as The Connected Vehicle Systems Alliance and standardized protocols such as Vehicle Signal Specification, is helping OEMs standardize vehicle communication. This approach reduces the complexity of vehicle software and enables faster development cycles. BMW's virtualized infotainment system, built using AWS cloud services, illustrates how standardization and cloud technology enable more efficient development. ... Gen AI, according to Marzani, is the next and most fascinating frontier for automotive innovation. AWS has already begun integrating AI into vehicle design and user experiences. It is helping OEMs develop in-car assistants that can provide real-time, context-aware information, such as interpreting warning signals or offering maintenance advice. But Marzani cautioned against deploying such systems without rigorous testing. "If an assistant misinterprets a warning and gives incorrect advice, the consequences could be severe. That's why we test these models in virtualized environments before deploying them in real-world scenarios."


The End of Dashboard Frustration: AI Powers New Era of Analytics

Enterprises can tackle the workflow friction challenge by embedding analytics directly into users' existing applications. Most applications these days are delivered on a SaaS basis, which means a web browser is the primary interface for employees' daily workflow. With the assistance of a browser plug-in, keywords can be highlighted to show critical information about any business entity, from customer profiles to product details, making data instantly accessible within the user's natural workflow. There's no need to open another application and lose time on task switching — the data is automatically presented within the natural course of an employee's operations. To address varying levels of data expertise, enterprises can take a hybrid approach that combines the natural language capabilities of large language models (LLMs) with the precision of traditional BI tools. In this way, an AI-powered BI assistant can translate natural language queries into precise data analytics operations. Employees will no longer need to know how to form specific, technical queries to get the data they need. Instead, they can simply ask a bot using ordinary text, just as if they were interacting with a human being. 
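The hybrid approach described above can be sketched in a few lines. In this illustration, a stubbed translate() function stands in for the LLM step (a real system would prompt a model with the live schema and the user's question), and the table names, columns, and sample data are all invented:

```python
import sqlite3

def translate(question: str) -> str:
    # Stand-in for the LLM: maps one canned natural-language pattern to SQL.
    # A production assistant would generate this from the schema dynamically.
    if "top customers" in question.lower():
        return ("SELECT name, SUM(amount) AS total FROM orders "
                "JOIN customers ON customers.id = orders.customer_id "
                "GROUP BY name ORDER BY total DESC LIMIT 3")
    raise ValueError("question not understood")

def ask(conn: sqlite3.Connection, question: str):
    sql = translate(question)            # natural language -> SQL
    return conn.execute(sql).fetchall()  # precise, traditional BI execution

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO orders VALUES (1, 100.0), (1, 50.0), (2, 75.0);
""")
print(ask(conn, "Who are our top customers?"))  # [('Acme', 150.0), ('Globex', 75.0)]
```

The division of labor is the point: the language model only produces the query, while a conventional engine executes it, keeping the numbers exact.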


The Intersection of AI and OSINT: Advanced Threats On The Horizon

Scammers and cybercriminals constantly monitor public information to collect insight on people, businesses and systems. They research social media profiles, public records, company websites, press releases and more to identify vulnerabilities and potential targets. What might seem like harmless information, such as a job change, a location-tagged photograph, media stories, or online interests and affiliations, can be pieced together to build a comprehensive profile of a target, enabling threat actors to launch targeted social engineering attacks. And it’s not just social media that threat actors are tracking and monitoring. They are known to research leaked credentials, IP addresses, bitcoin wallet addresses, and exploitable assets such as open ports, website vulnerabilities, and internet-exposed devices, including Internet of Things (IoT) hardware and servers. A range of OSINT tools is readily available to discover information about a company’s employees, assets and other confidential details. While OSINT offers significant benefits to cybercriminals, collecting and analyzing publicly available data remains a real challenge: sometimes information is easy to find, and sometimes extensive effort is needed to uncover loopholes and buried information.


The Expanding Dark Web Toolkit Using AI to Fuel Modern Phishing Attacks

Phishing is no longer limited to simple social engineering approaches; it has grown into a complex, multi-layered attack vector that employs dark web tools, AI, and undetectable malware. The availability of phishing kits and advanced cyber tools is making it easier than ever for novices to develop their malicious capabilities. Stopping these attacks can be tricky, given how convincing the websites and emails can appear to users. However, organizations and individuals must be vigilant in their efforts and continue to use regular security awareness training to educate users, employees, partners, and clients on the evolving dangers. All users should be reminded never to give out sensitive credentials in response to emails, and never to respond to unfamiliar links, phone calls, or messages. Using a zero-trust architecture for continuous verification is essential, as is maintaining vigilance when visiting websites or social media apps. Additionally, modern threat detection tools employing AI and advanced machine learning can help to understand incoming threats and immediately flag them ahead of user involvement. The use of MFA and biometric verification has a critical role to play, as do regular software updates and immediate patching of servers and vulnerabilities.


Infrastructure as Code in 2024: Why It’s Still So Terrible

The problem, Siva wrote, is, “when a developer decides to replace a manually managed storage bucket with a third-party service alternative, the corresponding IaC scripts must also be manually updated, which becomes cumbersome and error-prone as projects scale. The desync that occurs between the application and its runtime can lead to serious security implications, where resources are granted far more permissions than they require or are left rogue and forgotten.” He added, “Infrastructure from Code automates the bits that were previously manual in nature. Whenever an application changes, IfC can help provision resources and configurations that accurately reflect its runtime requirements, eliminating much of the manual work typically involved.” ... The open source work around OpenTofu may point the way forward out of this mess. Or at least that is the view of industry observer Kelsey Hightower, who likened the open sourcing of Terraform to the opening of the technologies that made the Internet possible, calling OpenTofu the "HTTP of the cloud," wrote Ohad Maislish, CEO and co-founder of env0. "For Terraform technology to achieve universal HTTP-like adoption, it had to outgrow its commercial origins," Maislish wrote. "In other words: Before it could belong to everyone, it needed to be owned by no one."
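The desync Siva describes can be illustrated with a toy drift check. The resource names below are invented, and this plain-Python comparison merely stands in for what real tooling (such as a `terraform plan` against live provider state) does at much greater depth:

```python
# Desired state as declared in IaC vs. what the application actually
# uses at runtime after the developer swapped in a third-party service.
desired = {"storage-bucket"}     # still declared in the IaC scripts
actual = {"third-party-store"}   # what the app switched to at runtime

def drift(desired: set, actual: set):
    """Report resources that have drifted out of sync with the IaC."""
    unmanaged = sorted(actual - desired)  # rogue: running, but not in IaC
    stale = sorted(desired - actual)      # forgotten: declared, but unused
    return unmanaged, stale

print(drift(desired, actual))  # (['third-party-store'], ['storage-bucket'])
```

The "unmanaged" side is where over-permissioned or forgotten resources accumulate; Infrastructure from Code aims to keep both sets identical by deriving the declaration from the application itself.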


CISA mandates secure cloud baselines for US agencies

The directive prescribes actionable measures such as the adoption of secure baselines, automated compliance tooling, and integration with security monitoring systems. These steps are in line with modern security models aimed at strengthening the security of the new attack surface presented by SaaS applications. Cory Michal highlighted both the practicality and challenges of the directive: "The requirements are reasonable, as the directive focuses on practical, actionable measures like adopting secure baselines, automated compliance tooling, and integration with security monitoring systems. These are foundational steps that align with modern SaaS and cloud security models following the Identify, Protect, Detect and Respond methodology, allowing organizations to embrace and secure this new attack surface." However, Michal also pointed out significant hurdles, including deadlines, funding, and skillset shortages, that agencies may face in complying with the directive. Many agencies may lack the skilled personnel and financial resources necessary to implement and manage these security measures. "Deadlines, lack of funding and lack of adequate skillsets will be the main challenges in meeting these requirements."
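At its core, "automated compliance tooling" means continuously diffing a tenant's settings against a secure baseline. The sketch below uses invented setting names and values; real baselines (for example, CISA's SCuBA secure configuration baselines) are far more detailed:

```python
# Secure baseline vs. one tenant's actual SaaS configuration.
# All keys and values here are illustrative, not a real baseline.
baseline = {"mfa_required": True, "legacy_auth": False, "audit_logging": True}
tenant = {"mfa_required": True, "legacy_auth": True, "audit_logging": True}

def deviations(baseline: dict, tenant: dict) -> dict:
    """Return every setting where the tenant deviates from the baseline."""
    return {k: {"expected": v, "actual": tenant.get(k)}
            for k, v in baseline.items() if tenant.get(k) != v}

print(deviations(baseline, tenant))  # flags the legacy_auth deviation
```

Feeding such deviation reports into a security monitoring system closes the loop the directive asks for: baseline, automated check, and alerting.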


Data protection challenges abound as volumes surge and threats evolve

Data security experts say CISOs can cope with these changes by understanding the nature of the shifting landscape, implementing foundational risk management strategies, and reaching for new tools that better protect data and quickly identify when adverse data events are underway. Although the advent of artificial intelligence increases data protection challenges, experts say AI can also help fill in some of the cracks in existing data protection programs. ... Experts say that what most CISOs should consider in running their data protection platforms is a wide range of complex security strategies that involve identifying and classifying information based on its sensitivity, establishing access controls and encryption mechanisms, implementing proper authentication and authorization processes, adopting secure storage and transmission methods and continuously monitoring and detecting potential security incidents. ... However, before considering these highly involved efforts, CISOs must first identify where data exists within their organizations, which is no easy feat. “Discover all your data or discover the data in the important locations,” Benjamin says. “You’ll never be able to discover everything but discover the data in the important locations, whether in your office, in G Suite, in your cloud, in your HR systems, and so on. Discover the important data.”
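The identify-and-classify step the experts describe can be sketched as a simple pattern scan. The patterns and sensitivity labels below are deliberately simplified examples, not a production classifier:

```python
import re

# Toy sensitive-data detectors; real programs use many more patterns,
# context rules, and ML-based classification on top of regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(record: str) -> str:
    """Tag a record 'restricted' if any sensitive pattern appears."""
    hits = [name for name, pat in PATTERNS.items() if pat.search(record)]
    return "restricted" if hits else "public"

print(classify("Contact: jane@example.com"))  # restricted
print(classify("Q3 revenue grew 4 percent"))  # public
```

Running a scan like this across the "important locations" Benjamin lists is the discovery step; access controls and encryption then attach to whatever it tags as restricted.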


How to Create an Enterprise-Wide Cybersecurity Culture

Cybersecurity culture planning requires a cross-organizational effort. While the CISO or CSO typically leads, the tone must be set from the top with active board involvement, Sullivan says. "The C-suite should integrate cybersecurity into business strategy, and key stakeholders from IT, legal, HR, finance, and operations must collaborate to address an ever-evolving threat landscape." She adds that engaging employees at all levels through continuous education will ensure that cybersecurity becomes everyone's responsibility. ... A big mistake many organizations make is treating cybersecurity as a separate initiative that's disconnected from the organization’s core mission, Sullivan says. "Cybersecurity should be recognized as a critical business imperative that requires board and C-suite-level attention and strategic oversight." Creating a healthy network security culture is an ongoing process that involves continuous learning, adaptation, and collaboration among teams, Tadmor says. This requires more thought than just setting policies -- it's also about integrating security practices into daily routines and workflows. "Regular training, open communication, and real-time monitoring are essential components to keep the culture alive and responsive to emerging network threats," he says.


What is serverless? Serverless computing explained

Serverless computing is an execution model for the cloud in which a cloud provider dynamically allocates only the compute resources and storage needed to execute a particular piece of code. Naturally, there are still servers involved, but the provider manages the provisioning and maintenance. ... Developers can focus on the business goals of the code they write, rather than on infrastructure questions. This simplifies and speeds up the development process and improves developer productivity. Organizations only pay for the compute resources they use in a very granular fashion, rather than buying physical hardware or renting cloud instances that mostly sit idle. That latter point is of particular benefit to event-driven applications that are idle much of the time but under certain conditions must handle many event requests at once. ... Serverless functions also must be tailored to the specific platform they run on. This can result in vendor lock-in and less flexibility. Although there are open source options available, the serverless market is dominated by the big three commercial cloud providers. Development teams often end up using tooling from their serverless vendor, which makes it hard to switch. 
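In practice, a serverless function reduces to just a handler that the provider invokes per event. The sketch below uses the AWS Lambda-style handler(event, context) shape; the event fields and greeting logic are invented for illustration:

```python
import json

def handler(event, context=None):
    # Business logic only: no server setup, provisioning, or scaling code.
    # The provider allocates compute per invocation and bills for it.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

print(handler({"name": "Ada"}))
```

The vendor lock-in the article mentions shows up exactly here: the handler signature, event shape, and deployment tooling differ per provider, so moving this function elsewhere means rewriting its edges.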


How In-Person Banking Can Survive the Digital Age

Today’s consumer quite rightly expects banks not merely to support environmental and sustainable causes but to actively apply those principles in their work. Pioneers like The Co-operative Bank in the UK have been asking us to help them in this area for more than two decades, and the approach is spreading worldwide: We recently helped Saudi National Bank adopt best sustainability practices. There is much more that banks can do to integrate their digital and physical experiences in branch in the way that retailers and casual dining spaces are now doing. Indeed, banks could look more closely at hospitality for inspiration in many areas. ... There’s a slightly ironic conundrum that banks and credit unions would do well to consider: Banks don’t want branches, but they need them; customers don’t need branches, but they want them. Unlocking the potential and value here is about maintaining physical points of presence but re-inventing their role. They need to become venues not for ‘lower order’ basic transactional activities, which dominated their activity in the past, but for ‘higher order’ financial life support for communities and individuals. It’s the latter that explains why customers want branches even when there’s no apparent functional need.



Quote for the day:

"The only way to discover the limits of the possible is to go beyond them into the impossible." -- Arthur C. Clarke

Daily Tech Digest - December 26, 2024

Best Practices for Managing Hybrid Cloud Data Governance

Kausik Chaudhuri, CIO of Lemongrass, explains monitoring in hybrid-cloud environments requires a holistic approach that combines strategies, tools, and expertise. “To start, a unified monitoring platform that integrates data from on-premises and multiple cloud environments is essential for seamless visibility,” he says. End-to-end observability enables teams to understand the interactions between applications, infrastructure, and user experience, making troubleshooting more efficient. ... Integrating legacy systems with modern data governance solutions involves several steps. Modern data governance systems, such as data catalogs, work best when fueled with metadata provided by a range of systems. “However, this metadata is often absent or limited in scope within legacy systems,” says Elsberry. Therefore, an effort needs to be made to create and provide the necessary metadata in legacy systems to incorporate them into data catalogs. Elsberry notes a common blocking issue is the lack of REST API integration. Modern data governance and management solutions typically have an API-first approach, so enabling REST API capabilities in legacy systems can facilitate integration. “Gradually updating legacy systems to support modern data governance requirements is also essential,” he says.


These Founders Are Using AI to Expose and Eliminate Security Risks in Smart Contracts

The vulnerabilities lurking in smart contracts are well-known but often underestimated. “Some of the most common issues include Hidden Mint functions, where attackers inflate token supply, or Hidden Balance Updates, which allow arbitrary adjustments to user balances,” O’Connor says. These aren’t isolated risks—they happen far too frequently across the ecosystem. ... “AI allows us to analyze huge datasets, identify patterns, and catch anomalies that might indicate vulnerabilities,” O’Connor explains. Machine learning models, for instance, can flag issues like reentrancy attacks, unchecked external calls, or manipulation of minting functions—and they do it in real-time. “What sets AI apart is its ability to work with bytecode,” he adds. “Almost all smart contracts are deployed as bytecode, not human-readable code. Without advanced tools, you’re essentially flying blind.” ... As blockchain matures, smart contract security is no longer the sole concern of developers. It’s an industry-wide challenge that impacts everyone, from individual users to large enterprises. DeFi platforms increasingly rely on automated tools to monitor contracts and secure user funds. Centralized exchanges like Binance and Coinbase assess token safety before listing new assets. 
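Working "with bytecode" can be illustrated crudely: scan deployed EVM bytecode for opcodes that often appear in risky patterns. DELEGATECALL (0xF4) and SELFDESTRUCT (0xFF) are real EVM opcodes, but a naive byte scan like this can false-positive on PUSH data, so treat it purely as a sketch of the idea, not as how production analyzers work:

```python
# Real EVM opcode values; the scanning approach itself is simplified.
RISKY_OPCODES = {0xF4: "DELEGATECALL", 0xFF: "SELFDESTRUCT"}

def flag_opcodes(bytecode_hex: str) -> list[str]:
    """Return names of risky opcodes whose byte values appear in the code."""
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    return sorted({name for b in code if (name := RISKY_OPCODES.get(b))})

# Hypothetical bytecode fragment ending in a DELEGATECALL (0xf4).
print(flag_opcodes("0x6080604052f4"))  # ['DELEGATECALL']
```

Real tools disassemble properly, track control flow, and apply trained models on top; the point here is only that analysis must start from bytes, because the human-readable source is usually unavailable.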


Three best change management practices to take on board in 2025

For change management to truly succeed, companies need to move from being change-resistant to change-ready. This means building up "change muscles" -- helping teams become adaptable and comfortable with change over the long term. For Mel Burke, VP of US operations at Grayce, the key to successful change is speaking to both the "head" and the "heart" of your stakeholders. Involve employees in the change process by giving them a voice and the ability to shape it as it happens. ... Change management works best when you focus on the biggest risks first and reduce the chance of major disruptions. Dedman calls this strategy "change enablement," where change initiatives are evaluated and scored on critical factors like team expertise, system dependencies, and potential customer impact. High-scorers get marked red for immediate attention, while lower-risk ones stay green for routine monitoring to keep the process focused and efficient. ... Peter Wood, CTO of Spectrum Search, swears by creating a "success signals framework" that combines data-driven metrics with culture-focused indicators. "System uptime and user adoption rates are crucial," he notes, "but so are team satisfaction surveys and employee retention 12-18 months post-change." 


Corporate Data Governance: The Cornerstone of Successful Digital Transformation

While traditional data governance focuses on the continuous and tactical management of data assets – ensuring data quality, consistency, and security – corporate data governance elevates this practice by integrating it with the organization’s overall governance framework and strategic objectives. It ensures that data management practices are not operating in silos but are harmoniously aligned and integrated with business goals, regulatory requirements, and ethical standards. In essence, corporate data governance acts as a bridge between data management and corporate strategy, ensuring that every data-related activity contributes to the organization’s mission and objectives. ... In the digital age, data is a critical asset that can drive innovation, efficiency, and competitive advantage. However, without proper governance, data initiatives can become disjointed, risky, and misaligned with corporate goals. Corporate data governance ensures that data management practices are strategically integrated with the organization’s mission, enabling businesses to leverage data confidently and effectively. By focusing on alignment, organizations can make better decisions, respond swiftly to market changes, and build stronger relationships with customers. 


What is an IT consultant? Roles, types, salaries, and how to become one

Because technology is continuously changing, IT consultants can provide clients with the latest information about new technologies as they become available, recommending implementation strategies based on their clients’ needs. As a result, for IT consultants, keeping the pulse of the technology market is essential. “Being a successful IT consultant requires knowing how to walk in the shoes of your IT clients and their business leaders,” says Scott Buchholz, CTO of the government and public services sector practice at consulting firm Deloitte. A consultant’s job is to assess the whole situation, the challenges, and the opportunities at an organization, Buchholz says. As an outsider, the consultant can see things clients can’t. ... “We’re seeing the most in-demand types of consultants being those who specialize in cybersecurity and digital transformation, largely due to increased reliance on remote work and increased risk of cyberattacks,” he says. In addition, consultants with program management skills are valuable for supporting technology projects, assessing technology strategies, and helping organizations compare and make informed decisions about their technology investments, Farnsworth says.


Blockchain + AI: Decentralized Machine Learning Platforms Changing the Game

Tech giants with vast computing resources and proprietary datasets have long dominated traditional AI development. Companies like Google, Amazon, and Microsoft have maintained a virtual monopoly on advanced AI capabilities, creating a significant barrier to entry for smaller players and independent researchers. However, the introduction of blockchain technology and cryptocurrency incentives is rapidly changing this paradigm. Decentralized machine learning platforms leverage blockchain's distributed nature to create vast networks of computing power. These networks function like a global supercomputer, where participants can contribute their unused computing resources in exchange for cryptocurrency tokens. ... The technical architecture of these platforms typically consists of several key components. Smart contracts manage the distribution of computational tasks and token rewards, ensuring transparent and automatic execution of agreements between parties. Distributed storage solutions like IPFS (InterPlanetary File System) handle the massive datasets required for AI training, while blockchain networks maintain an immutable record of transactions and model provenance.
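The coordination layer described above can be sketched as a toy contract that assigns rewards for completed compute tasks. This is plain Python standing in for an on-chain smart contract; the class, reward amounts, and worker names are all invented:

```python
class TrainingContract:
    """Toy stand-in for a smart contract paying tokens per compute task."""

    def __init__(self, reward_per_task: int):
        self.reward_per_task = reward_per_task
        self.balances = {}   # token balance per participant
        self.results = {}    # task_id -> submitted result hash

    def submit_result(self, worker: str, task_id: int, result_hash: str):
        # On-chain, this record would be immutable and publicly auditable.
        if task_id in self.results:
            raise ValueError("task already completed")
        self.results[task_id] = result_hash
        self.balances[worker] = self.balances.get(worker, 0) + self.reward_per_task

contract = TrainingContract(reward_per_task=10)
contract.submit_result("node-a", task_id=1, result_hash="0xabc")
contract.submit_result("node-b", task_id=2, result_hash="0xdef")
print(contract.balances)  # each worker credited per completed task
```

The real platforms add what this sketch omits: verification that the submitted result is honest work, distributed storage of the training data (e.g., via IPFS), and settlement in actual tokens.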


DDoS Attacks Surge as Africa Expands Its Digital Footprint

A larger attack surface, however, is not the only reason for the increased DDoS activity in Africa and the Middle East, Hummel says. "Geopolitical tensions in these regions are also fueling a surge in hacktivist activity as real-world political disputes spill over into the digital world," he says. "Unfortunately, hacktivists often target critical infrastructure like government services, utilities, and banks to cause maximum disruption." And DDoS attacks are by no means the only manifestation of the new threats that organizations in Africa are having to contend with as they broaden their digital footprint. ... Attacks on critical infrastructure and financially motivated attacks by organized crime are other looming concerns. In the center's assessment, Africa's government networks and networks belonging to the military, banking, and telecom sectors are all vulnerable to disruptive cyberattacks. Exacerbating the concern is the relatively high potential for cyber incidents resulting from negligence and accidents. Organized crime gangs, long the scourge of organizations in the US, Europe, and other parts of the world, present an emerging threat to organizations in Africa, the center has assessed.


Optimizing AI Workflows for Hybrid IT Environments

Hybrid IT offers flexibility by combining the scalability of the cloud with the control of on-premises resources, allowing companies to allocate their resources more precisely. However, this setup also introduces complexity. Managing data flow, ensuring security, and maintaining operational efficiency across such a blended environment can become an overwhelming task if not addressed strategically. To manage AI workflows effectively in this kind of setup, businesses must focus on harmonizing infrastructure and resources. ... Performance optimization is crucial when running AI workloads across hybrid environments. This requires real-time monitoring of both on-premises and cloud systems to identify bottlenecks and inefficiencies. Implementing performance management tools allows for end-to-end visibility of AI workflows, enabling teams to proactively address performance issues before they escalate. ... Scalability also supports agility, which is crucial for businesses that need to grow and iterate on AI models frequently. Cloud-based services, in particular, allow teams to experiment and test AI models without being constrained by on-premises hardware limitations. This flexibility is essential for staying competitive in fields where AI innovation happens rapidly.


The Cloud Back-Flip

Cloud repatriation is driven by various factors, including high cloud bills, hidden costs, complexity, data sovereignty, and the need for greater data control. In markets like India—and globally—these factors are all relevant today, points out Vishal Kamani, Cloud Business Head, Kyndryl India. “Currently, rising cloud costs and complexity are part of the ‘learning curve’ for enterprises transitioning to cloud operations.” ... While cloud repatriation is not an alien concept anymore, such reverse migration back to on-premises data centres is seen happening only in organisations that are technology-driven and have deep tech expertise, observes Gaurang Pandya, Director, Deloitte India. “This involves them focusing back on the basics of IT infrastructure, which does need a high number of skilled employees. The major driver for such reverse migration is increasing cloud prices and performance requirements. In an era of edge computing and 5G, each end system has now been equipped with far more computing resources than it ever had. This increases their expectations from various service providers.” Money is a big reason too, especially when you don’t know where it is going.


Why Great Programmers Fail at Engineering

Being a good programmer is about mastering the details — syntax, algorithms, and efficiency. But being a great engineer? That’s about seeing the bigger picture: understanding systems, designing for scale, collaborating with teams, and ultimately creating software that not only works but excels in the messy, ever-changing real world. ... Good programmers focus on mastering their tools — languages, libraries, and frameworks — and take pride in crafting solutions that are both functional and beautiful. They are the “builders” who bring ideas to life one line of code at a time. ... Software engineering requires a keen understanding of design principles and system architecture. Great code in a poorly designed system is like building a solid wall in a crumbling house — it doesn’t matter how good it looks if the foundation is flawed. Many programmers struggle to design systems for scalability and maintainability, to think in terms of trade-offs such as performance vs. development speed, and to plan for edge cases and future growth. Software engineering is as much about people as it is about code. Great engineers collaborate with teams, communicate ideas clearly, and balance stakeholder expectations. ... Programming success is often measured by how well the code runs, but engineering success is about how well the system solves a real-world problem.



Quote for the day:

"Ambition is the path to success. Persistence is the vehicle you arrive in." -- Bill Bradley

Daily Tech Digest - December 25, 2024

The promise and perils of synthetic data

Synthetic data is no panacea, however. It suffers from the same “garbage in, garbage out” problem as all AI. Models create synthetic data, and if the data used to train these models has biases and limitations, their outputs will be similarly tainted. For instance, groups poorly represented in the base data will be just as poorly represented in the synthetic data. “The problem is, you can only do so much,” Keyes said. “Say you only have 30 Black people in a dataset. Extrapolating out might help, but if those 30 people are all middle-class, or all light-skinned, that’s what the ‘representative’ data will all look like.” To this point, a 2023 study by researchers at Rice University and Stanford found that over-reliance on synthetic data during training can create models whose “quality or diversity progressively decrease.” Sampling bias — poor representation of the real world — causes a model’s diversity to worsen after a few generations of training, according to the researchers. Keyes sees additional risks in complex models such as OpenAI’s o1, which he thinks could produce harder-to-spot hallucinations in their synthetic data. These, in turn, could reduce the accuracy of models trained on the data — especially if the hallucinations’ sources aren’t easy to identify.
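The generational diversity decay the researchers describe can be shown with a deliberately crude simulation: "training" here is just resampling with a bias toward typical values, which is enough to watch the spread collapse:

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1000)]  # the "real" population

def next_generation(data, keep=0.8):
    # Sampling bias: keep only the most typical 80% (values nearest the
    # mean), then resample a full-size "synthetic" dataset from them.
    mean = statistics.fmean(data)
    typical = sorted(data, key=lambda x: abs(x - mean))[: int(len(data) * keep)]
    return [random.choice(typical) for _ in range(len(data))]

spreads = [statistics.stdev(data)]
for _ in range(5):
    data = next_generation(data)
    spreads.append(statistics.stdev(data))

# Diversity (standard deviation) shrinks generation over generation.
print([round(s, 3) for s in spreads])
```

The mechanism matches Keyes's example: whatever the "typical" slice looks like (middle-class, light-skinned, and so on) is all that survives a few rounds of extrapolation.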


Federal Privacy Is Inevitable in The US (Prepare Now)

The writing’s on the wall for federal privacy. It’s simply not tenable for almost half the states to have varying privacy thresholds while the other half has nothing. Our interconnected business and digital ecosystems need certainty and consistency across the country. Congress can and should stand up for American privacy. The good news? Recent history shows that sweeping reforms are possible. From the CHIPS and Science Act to major pandemic stimulus, lawmakers have shown their ability to meet moments with big regulations. While states deserve credit for filling the privacy void, federal action must follow. For now, there’s no time to waste. Enterprises that build privacy-ready operations today will be better positioned to thrive under future regulations, maintain customer trust, and turn compliance into a competitive advantage. On the other hand, slow-to-move companies risk regulatory penalties and loss of customer confidence in an increasingly privacy-conscious marketplace. Future-forward organizations recognize that investing in privacy isn’t just about compliance; it’s about building a sustainable competitive advantage in the data-driven economy. The choice is clear: invest in privacy now or play catch-up when federal mandates arrive.


AI use cases are going to get even bigger in 2025

Few sectors stand to gain more from AI advancements than defense. “We are witnessing a surge in applications like autonomous drone swarms, electronic spectrum awareness, and real-time battlefield space management, where AI, edge computing, and sensor technologies are integrated to enable faster responses and enhanced precision,” says Meir Friedland, CEO at RF spectrum intelligence company Sensorz. ... “AI is transforming genome sequencing, enabling faster and more accurate analyses of genetic data,” Khalfan Belhoul, CEO at the Dubai Future Foundation, tells Fast Company. “Already, the largest genome banks in the U.K. and the UAE each have over half a million samples, but soon, one genome bank will surpass this with a million samples.” But what does this mean? “It means we are entering an era where healthcare can truly become personalized, where we can anticipate and prevent certain diseases before they even develop,” Belhoul says. ... The potential for AI extends far beyond the use cases dominating today’s headlines. As Friedland notes, “AI’s future lies in multi-domain coordination, edge computing, and autonomous systems.” These advancements are already reshaping industries like manufacturing, agriculture, and finance.


2025 Will Be the Year That AI Agents Transform Crypto

The value of AI agents lies not just in their utility but in their potential to scale human capabilities. Agents are no longer just tools — they are emerging as participants in the on-chain economy, driving innovation across finance, gaming and decentralized social platforms. With protocols such as Virtuals and open-source frameworks like ELIZA, it’s becoming increasingly simple for developers to build, deploy and iterate AI agents that serve an increasingly diverse set of use cases. ... Unlike the core foundational AI models that are developed behind the walled gardens of OpenAI and Anthropic, AI agents are being innovated in the trenches of the crypto world. And for good reason. Blockchains provide the ideal infrastructure as they offer permissionless and frictionless financial rails, enabling agents to seed wallets, transact and send funds autonomously — tasks that would be unfeasible using traditional financial systems. In addition, the open-source nature of crypto allows developers to leverage existing frameworks to launch and iterate on agents faster than ever before. With more no-code platforms like Top Hat gaining traction, it’s only getting easier for anyone to be able to launch an agent in minutes. 


Unpacking OpenAI's Latest Approach to Make AI Safer

OpenAI said it used an internal reasoning model to generate synthetic examples of chain-of-thought responses, each referencing specific elements of the company's safety policy. Another model, referred to as the "judge," evaluated these examples to ensure they met quality standards. The approach looks to address the challenges of scalability and consistency, OpenAI said. Human-labeled datasets are labor-intensive and prone to variability, but properly vetted synthetic data can theoretically offer a scalable solution with uniform quality. The method can potentially optimize training and reduce the latency and computational overhead associated with models reading lengthy safety documents during inference. OpenAI acknowledged that aligning AI models with human safety values remains a challenge. Users continue to develop jailbreak techniques to bypass safety restrictions, such as framing malicious requests in deceptive or emotionally charged contexts. The o3 series models scored better than their peers Gemini 1.5 Flash, GPT-4o and Claude 3.5 Sonnet on the Pareto benchmark, which measures a model's ability to resist common jailbreak strategies. But the results may be of little consequence, as adversarial attacks evolve alongside improvements in model defenses.
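OpenAI has not published implementation details, but the generate-then-judge pattern the article describes can be sketched as follows. The `generate_example` and `judge_quality` functions here are stand-in stubs of my own invention; a real pipeline would call the teacher and judge models via an API.

```python
# Generate-then-judge pipeline (sketch): a "teacher" model drafts synthetic
# chain-of-thought safety examples, and a "judge" model filters out any that
# fail quality checks. Both models are stubbed as plain functions here.

def generate_example(policy_clause: str) -> dict:
    """Stub teacher: drafts a chain-of-thought response citing a policy clause."""
    return {
        "prompt": f"Request touching on: {policy_clause}",
        "chain_of_thought": f"Per policy clause '{policy_clause}', decline to give unsafe detail.",
        "cited_clause": policy_clause,
    }

def judge_quality(example: dict) -> float:
    """Stub judge: scores 1.0 only if the reasoning actually cites the clause."""
    return 1.0 if example["cited_clause"] in example["chain_of_thought"] else 0.0

def build_training_set(policy_clauses, threshold=0.8):
    kept = []
    for clause in policy_clauses:
        example = generate_example(clause)
        if judge_quality(example) >= threshold:  # keep only judge-approved examples
            kept.append(example)
    return kept

dataset = build_training_set(["no weapons instructions", "no medical diagnosis"])
print(len(dataset))  # 2
```

The point of the pattern is that quality control scales with compute rather than with human labeling hours, at the cost of trusting the judge model's criteria.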


The yellow brick road to agentic AI

Many believe this AI era is the most profound we’ve ever seen in tech. We agree and liken it to mobile’s role in driving on-premises workloads to the cloud and disrupting information technology. But we see this as even more impactful. For AI agents to work, however, we have to reinvent the software stack and break down 50 years of silo building. The emergence of data lakehouses is not the answer, as they are just a bigger siloed asset. Rather, software as a service as we know it will be reimagined. Two prominent chief executives agree. At Amazon Web Services Inc.’s recent AWS re:Invent conference, we sat down with Amazon.com Inc. CEO Andy Jassy. ... There is a clear business imperative behind this shift. We believe companies will differentiate themselves by aligning end-to-end operations with a unified set of plans — from three-year strategic assumptions about demand to real-time, minute-by-minute decisions, such as how to pick, pack and ship individual orders to meet long-term goals. The function of management has always involved planning and resource allocation across various timescales and geographies, but previously there was no software capable of executing these plans seamlessly across every time horizon.


The AI backlash couldn’t have come at a better time

Developers, engineers, operations personnel, enterprise architects, IT managers, and others need AI to be as boring for them as it has become for consumers. They need it not to be a “thing,” but rather something that is managed and integrated seamlessly into — and supported by — the infrastructure stack and the tools they use to do their jobs. They don’t want to hear endlessly about AI; they just want AI to work seamlessly for them so that it simply works for customers. ... The models themselves are also, rightly, growing more mainstream. A year ago they were anything but, with talk of potentially gazillions of parameters and fears about the legal, privacy, financial, and even environmental challenges such a data abyss would create. Those LLMs are still out there, and still growing, but many organizations are looking for their models to be far less extreme. They don’t need (or want) a model that includes everything anyone ever learned about anything; rather, they need models that are fine-tuned with data that is relevant to the business, that don’t necessarily require state-of-the-art GPUs, and that promote transparency and trust. As Matt Hicks, CEO of Red Hat, put it, “Small models unlock adoption.”


Systems Thinking in Leading Transformation for the Future

The first step is aligning your internal goals with your external insights. Leaders must articulate a clear vision that ties the organization's purpose to broader societal and industry trends. For Nooyi and PepsiCo, that meant “starting from the outside.” Nooyi tasked her senior leaders with identifying external factors that would likely impact the company. She said, “They pointed to several megatrends … including a preoccupation with health and wellness, scarcity of water and other natural resources, constraints created by global climate change … and a talent market characterized by shortages of key people.” ... Systems thinking involves understanding the interdependencies within and outside an organization. For example, if you are embarking on any transformation project, you’ll likely need to explore new partnerships with suppliers and regional authorities and regulators. ... Using frameworks like OKRs (Objectives and Key Results), you can evaluate how each initiative within your transformation program contributes to the overarching objective. For example, a laudable main aim such as a commitment to environmental sustainability would likely involve numerous associated projects: for example, water conservation, waste reduction, and reduced carbon footprint.


The 2024 cyberwar playbook: Tricks used by nation-state actors

While nation-state actors loved zero days for swift break-ins, phishing remained a sly plan B. It let them craft sneaky schemes to worm into systems, proving that 2024 was the year of both bold strikes and artful cons. Russian nation-state actors leaned heavily on phishing in 2024, with other APTs, like Iranian and Pakistani groups, dabbling in the tactic as well. The following are some of the standout campaigns from 2024 where phishing was the go-to for initial access. ... While credential harvesting through malware delivered via phishing was fairly common, nation-state actors rarely resorted to scavenging credentials from hack forums or drop sites as a primary tactic. When asked, Hughes noted, “I’m not familiar with this being the primary MO by the APTs, who instead are targeting devices, products and vendors with vulnerabilities and misconfigurations, but once inside, they do compromise credentials and use those to pivot, move laterally, persist in environments and more.” ... These actors weren’t always about flashy, custom malware. Quite often, they used legit tools like PowerShell, rootkits, RDP, and other off-the-shelf system features to sneak in, stay undetected, and set up long-term access. This made their attacks stealthy, persistent, and ready for future moves. 


Generative AI is now a must-have tool for technology professionals

As part of this trend, "we are witnessing developers shift from writing code to orchestrating AI agents," said Jithin Bhasker, general manager and vice president at ServiceNow. The efficiency gained from gen AI adoption by technologists isn't just about personal productivity; it's urgent "with the projected shortage of half a million developers by 2030 and the need for a billion new apps," he added. ... Still, as gen AI becomes a commonplace tool in technology shops, Berent-Spillson advises caution. "The real game-changer here is speed, but there's a catch," he said. "While AI can dramatically compress cycle time, it will also amplify any existing process constraints. Think of it like adding a supercharger to your car -- if your chassis isn't solid, you're just going to get to the problem faster." Exercise caution "regarding code quality, maintainability, and IP considerations," McDonagh-Smith advises. "While syntactically correct, AI tools have been seen to create code that's logically flawed or inefficient, leading to potential code degradation over time if not reviewed carefully. We should also guard against software sprawl where the ease of creating AI-generated code results in overly complex or unnecessary code that might make projects more difficult to maintain over time."



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - December 24, 2024

Concerns over the security of electronic personal health information intensify

When entities outside HIPAA’s purview experience breaches, the Federal Trade Commission (FTC) Health Breach Notification Rule applies. However, this dual system creates confusion among stakeholders, who must navigate overlapping jurisdictions. The lack of a unified, comprehensive framework exacerbates the problem, leaving patients uncertain about the security of their health data. Another pressing concern is the cybersecurity of medical devices. Many modern medical devices connect to networks or the internet, increasing their susceptibility to cyberattacks. Hospitals often operate thousands of interconnected devices, making it challenging to monitor and secure every endpoint. Insecure devices not only endanger patient privacy but also jeopardize care delivery. For instance, a compromised infusion pump or defibrillator could have life-threatening consequences. The Food and Drug Administration (FDA) has taken steps to address these vulnerabilities through premarket and post-market cybersecurity guidelines. However, the onus of ensuring device security often falls into a gray area between manufacturers and healthcare providers. 


The rise of “soft” skills: How GenAI is reshaping developer roles

The successful developer in this evolving landscape will be one who can effectively combine technical expertise with strong interpersonal skills. This includes not only the ability to work with AI tools but also the capability to collaborate with both technical and non-technical stakeholders. After all, with less of a need for coders to do the low-level, routine work of software development, more emphasis will be placed on coders’ ability to collaborate with business managers to understand their goals and create technology solutions that will advance them. Additionally, the coding that they’ll be doing will be more complex and high-level, often requiring work with other developers to determine the best way forward. The emphasis on soft skills—including adaptability, communication, and collaboration—has become as crucial as technical proficiency. As the software development field continues to evolve, it’s clear that the future belongs to those who embrace AI as a powerful complement to their skills rather than viewing it as a threat. The coding profession isn’t disappearing—it’s transforming into a role that demands a more comprehensive skill set, combining technical mastery with strong interpersonal capabilities.


Top 10 Cybersecurity Trends to Expect in 2025

Zero-day vulnerabilities are still one of the major threats in cybersecurity. By definition, these faults remain unknown to software vendors and the larger security community, thus leaving systems exposed until a fix can be developed. Attackers are using zero-day exploits frequently and effectively, affecting even major companies, hence the need for proactive measures. Advanced threat actors use zero-day attacks to achieve goals including espionage and financial crimes. ... Integrating regional and local data privacy regulations such as GDPR and CCPA into the cybersecurity strategy is no longer optional. Companies need to look out for regulations that will become legally binding for the first time in 2025, such as the EU's AI Act. In 2025, regulators will continue to impose stricter guidelines related to data encryption and incident reporting, including in the realm of AI, showing rising concerns about online data misuse. Decentralized security models, such as blockchain, are being considered by some companies to reduce single points of failure. Such systems offer enhanced transparency to users and allow them much more control over their data. ... Verifying user identities has become more challenging as browsers enforce stricter privacy controls and attackers develop more sophisticated bots. 


Navigating AI in Aviation: A Roadmap for Risk and Security Management Professionals

The Roadmap for Artificial Intelligence Safety Assurance, recently published by the FAA, recognizes the potential of AI in aviation and emphasizes the need for safety assurance, industry collaboration and incremental implementation. This roadmap, combined with other international frameworks, offers a global framework for managing AI risks in aviation. ... While AI demonstrates the potential for enhanced operational efficiency, predictive maintenance and even autonomous flight, these benefits come with significant security and compliance risks. ... Differentiating between learned AI (static) and learning AI (adaptive) poses a significant challenge in AI risk management. The FAA roadmap calls for continuous monitoring and assurance, especially for learning AI, echoing the need for dynamic risk assessment protocols like those recommended in NIST-AI-600-1 for managing generative AI models. ... Incorporating AI in aviation is far from straightforward, and due to human safety concerns, it involves navigating a constantly evolving landscape of risks and at times overbearing regulatory requirements. For risk and security professionals, the key task is to align AI technologies with operational safety and evolving regulatory requirements.


The Urgent Need for Data Minimization Standards

On one side of the spectrum is the redaction of direct identifiers such as names, or payment card information such as credit card numbers. On the other side of the spectrum lies anonymization, where re-identification of individuals is extremely unlikely. Within the spectrum, we also find pseudonymization, which, depending on the jurisdiction, often means something like reversible de-identification. Many organizations are keen to anonymize their data because, if anonymization is achieved, the data falls outside of the scope of data protection laws, as it is no longer considered personal information. ... We hold that the claim that data anonymization is impossible is based on a lack of clarity around what is required for anonymization, with organizations often either wittingly or unwittingly misusing the term for what is actually a redaction of direct identifiers. Furthermore, another common claim is that data minimization is in irresolvable tension with the use of data at a large scale in the machine learning context. This claim is not only based on a lack of clarity around data minimization but also a lack of understanding around the extremely valuable data that often surrounds identifiable information, such as data about products, conversation flows, document topics, and more.
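The two ends of the spectrum described above can be illustrated with a minimal sketch. The field names and the HMAC-based tokenization below are illustrative assumptions, not a compliance recipe; note that keyed-hash pseudonyms are re-linkable by whoever holds the key, which is precisely why pseudonymized data usually remains personal data.

```python
import hmac
import hashlib

def redact(record: dict, direct_identifiers: set) -> dict:
    """Redaction: drop direct identifiers outright (one end of the spectrum)."""
    return {k: v for k, v in record.items() if k not in direct_identifiers}

def pseudonymize(record: dict, fields: list, key: bytes) -> dict:
    """Pseudonymization: replace identifiers with consistent keyed tokens.
    The key holder can re-link tokens to individuals, so under most regimes
    the output is still personal data -- unlike true anonymization."""
    out = dict(record)
    for field in fields:
        token = hmac.new(key, str(out[field]).encode(), hashlib.sha256).hexdigest()[:12]
        out[field] = token
    return out

patient = {"name": "Ada Lovelace", "card": "4111-1111-1111-1111", "diagnosis": "flu"}
print(redact(patient, {"name", "card"}))            # {'diagnosis': 'flu'}
print(pseudonymize(patient, ["name"], b"secret-key"))
```

Because the same input and key always yield the same token, pseudonymized records can still be joined across datasets, which is exactly the re-identification risk the article distinguishes from genuine anonymization.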


How CISOs can make smarter risk decisions

Bot detection works by recognizing markers of bad bots, including requests originating from malicious domains and telltale patterns of behavior. Establishing a baseline of normal human web activity and recognizing anomalous behavior from incoming traffic is at the core of effective bot detection. ... Unsurprisingly, for businesses focused on managing users’ money, account takeover and carding attacks are common in the financial industry. In these instances, cybercriminals try to break into accounts and steal information from the payments page. As such, the financial industry has been an early adopter of cybersecurity protocols and tools to ensure a fully comprehensive and well-funded security program, while the travel and hospitality industries have not yet made that pivot in the same way. ... A good CISO makes balanced risk decisions. A bad CISO gets in the way of helping the company innovate. The combination of industry best practices and regulation forcing the adoption of robust security tooling and methodology pushes companies to create a strong baseline to build in effective protections. However, CISOs must evaluate carefully what assets they choose to put maximum security measures behind. If you argue that everything needs that high level of security, you become the CISO who cried wolf.
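The baseline-plus-anomaly idea at the core of bot detection can be sketched in a few lines. This is a deliberately simplified single-signal version (request volume only, flagged by z-score); real detectors combine many behavioral signals, and the sample traffic and threshold below are illustrative assumptions.

```python
from statistics import mean, stdev

def flag_anomalous_clients(request_counts: dict, baseline: list, z_threshold: float = 3.0):
    """Flag clients whose request volume deviates sharply from a baseline
    of observed human activity (sketch: one signal, simple z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    flagged = []
    for client, count in request_counts.items():
        z = (count - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            flagged.append(client)
    return flagged

# Baseline: requests per minute observed from known-human sessions.
human_baseline = [4, 6, 5, 7, 5, 6, 4, 5]
traffic = {"198.51.100.7": 6, "203.0.113.9": 240}  # the second looks automated
print(flag_anomalous_clients(traffic, human_baseline))  # ['203.0.113.9']
```

The same structure generalizes: establish what "normal" looks like per signal, then score incoming traffic against it rather than maintaining static block lists.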


Developers Are Key to Stopping Rising API Security Threat

Developers and security teams typically share responsibility for ensuring APIs are secure. “While the security team is ultimately responsible for the overall security posture of an organization, developers play a key role in building and managing secure APIs,” Whaley said. “They need to write secure code and implement security measures during the development phase, such as input validation, authentication, encryption and access control.” The security team defines and enforces security policies, he said. They’re also responsible for establishing governance frameworks and managing tools to monitor, detect and respond to threats. ... Developers also play an important role in remediating API security problems, he said. Their job is to implement fixes and ensure that vulnerabilities are properly addressed. Remediating an incident can include fixing vulnerabilities, deploying patches and addressing any misconfigurations. But it can also sometimes mean hiring external help in the form of security consultants, investing in new security tools and covering any legal and compliance fees, he said. “Additionally, there are intangible factors to consider, like damage to brand reputation and loss of customer confidence, which can have a big impact even if they are harder to quantify,” Whaley added.
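The input-validation measure Whaley mentions can be sketched for a hypothetical user-creation endpoint. The endpoint, field names, and rules below are my own assumptions for illustration; the key idea is allow-listing expected input rather than trying to enumerate bad input.

```python
import re

def validate_create_user(payload: dict) -> list:
    """Input validation for a hypothetical POST /users endpoint (sketch).
    Reject anything not explicitly expected: allow-list, not deny-list."""
    errors = []
    allowed = {"email", "display_name"}
    unexpected = set(payload) - allowed
    if unexpected:
        errors.append(f"unexpected fields: {sorted(unexpected)}")
    email = payload.get("email", "")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email")
    name = payload.get("display_name", "")
    if not (1 <= len(name) <= 64) or any(c in name for c in "<>\"'"):
        errors.append("invalid display_name")
    return errors

print(validate_create_user({"email": "a@b.co", "display_name": "Ada"}))  # []
print(validate_create_user({"email": "oops", "role": "admin"}))
```

Rejecting the unexpected `role` field outright is the point: mass-assignment bugs, where a client smuggles in privileged fields, are a recurring API vulnerability class.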


Companies Race to Use AI Security Against AI-Driven Threats

First, securing AI by design is crucial, as our customers increasingly rely on AI in their ecosystems. As a cybersecurity solution provider, our objective is to ensure our customers are protected when using new technologies. The second vector involves combating adversaries who use AI to launch attacks. The rate of these attacks is exponentially faster and more sophisticated than ever before. To counter this, we must utilize AI to protect against AI-driven attacks. The third vector focuses on how AI can benefit security practitioners. By simplifying complex data analysis and enhancing product interactions, AI can significantly improve the efficiency and effectiveness of security operations. Solutions such as AI Access Security, which provides visibility into AI usage within enterprises and ensures AI applications are used securely, have seen rapid development. With 100 customers already benefiting from our AI security solutions, we see a clear shift in maturity levels. ... Autonomous SOCs are becoming a reality, driven by two key factors. First, adversaries are evolving at a pace that outstrips our ability to scale human resources. Second, there's a shortage of qualified cybersecurity talent. These dual pressures on both supply and demand necessitate technological intervention.


Overcoming modern observability challenges

Observability is crucial for quickly detecting issues and taking corrective actions to ensure that application performance does not negatively impact customer experience. With millions of transactions occurring every second, relying on traditional logic, predefined rules, and human intervention is no longer sufficient. According to a 2023 Gartner report, applied observability has emerged as one of the top 10 strategic technology trends, underscoring the increasing need to use AI to build smarter, more automated solutions to stay competitive and optimize business operations in real time. Today’s observability solutions must go beyond static monitoring by incorporating AI and machine learning to detect patterns, trends, and anomalies. By automatically identifying outliers and emerging issues, AI-driven systems reduce the mean time to detect (MTTD) and mean time to resolve (MTTR), driving efficiency and helping teams address potential problems before they affect end users. ... Organizations need an observability solution that is comprehensive, cost-effective, and intelligent. The Kloudfuse observability platform is designed to monitor modern cloud-native workloads while optimizing costs, offering insights into model performance and mitigating risks.
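The anomaly-detection idea behind reducing MTTD can be sketched with a rolling window over a metric stream. This is a toy single-metric version with invented latency numbers; production systems use far richer models, but the shape of the computation is the same.

```python
from collections import deque
from statistics import mean, stdev

def detect_latency_anomalies(samples: list, window: int = 5, z_threshold: float = 3.0):
    """Rolling-window outlier detection over a latency stream (sketch).
    Each new sample is compared against the recent window's mean/stdev."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma and abs(value - mu) / sigma > z_threshold:
                anomalies.append((i, value))  # flag before users feel it
        history.append(value)
    return anomalies

latencies_ms = [100, 102, 98, 101, 99, 100, 480, 101]  # one obvious spike
print(detect_latency_anomalies(latencies_ms))  # [(6, 480)]
```

Because the baseline adapts as the window slides, slow drifts are tolerated while sharp deviations are surfaced immediately, which is what shrinks MTTD relative to fixed-threshold alerting.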


Managing Software Engineering Teams of Artificial Intelligence Developers

Regardless of its industry, every organization has an AI solution, is working on AI integration, or has a plan for it in its roadmap. While developers are being trained in the various technological skills needed for development, senior leadership must focus on strategies to integrate and align these efforts with the broader organization. ... Investing in AI alone will not guarantee success for the company. Avoid making investment decisions solely based on the fear of missing out. For the business to thrive in the long run, it must focus on value creation through AI integration. Follow standard processes and conduct thorough due diligence to identify where AI can effectively drive value for your product. Collaborate closely with the product, business, and engineering teams to define the scope of work and develop a strategic vision that ensures alignment within the team. It is also crucial to achieve stakeholder alignment, especially given the complexity of the projects, while setting realistic expectations. ... As an engineering leader, invest in the right skills required for the project. Empower the team to make the best decisions. Build strong expertise within the teams and provide learning opportunities by allowing team members to attend learning sessions, conferences, hackathons, and the like.



Quote for the day:

“It's failure that gives you the proper perspective on success.” -- Ellen DeGeneres

Daily Tech Digest - December 23, 2024

‘Orgs need to be ready’: AI risks and rewards for cybersecurity in 2025

“In 2025, we expect to see more AI-driven cyberthreats designed to evade detection, including more advanced evasion techniques bypassing endpoint detection and response (EDR), known as EDR killers, and traditional defences,” Khalid argues. “Attackers may use legitimate applications like PowerShell and remote access tools to deploy ransomware, making detection harder for standard security solutions.” On a more frightening note, Michael Adjei, director of systems engineering at Illumio, believes that AI will offer somewhat of a field day for social engineers, who will trick people into actually creating breaches themselves: “Ordinary users will, in effect, become unwitting participants in mass attacks in 2025. ... “With greater adoption of AI will come increased cyberthreats, and security teams need to remain nimble, confident and knowledgeable.” Similarly, Britton argues that teams “will need to undergo a dedicated effort around understanding how [AI] can deliver results”. “To do this, businesses should start by identifying which parts of their workflows are highly manual, which can help them determine how AI can be overlaid to improve efficiency. Key to this will be determining what success looks like. Is it better efficiency? Reduced cost?”


Will we ever trust robots?

The chief argument for robots with human characteristics is a functional one: Our homes and workplaces were built by and for humans, so a robot with a humanlike form will navigate them more easily. But Hoffman believes there’s another reason: “Through this kind of humanoid design, we are selling a story about this robot that it is in some way equivalent to us or to the things that we can do.” In other words, build a robot that looks like a human, and people will assume it’s as capable as one. In designing Alfie’s physical appearance, Prosper has borrowed some aspects of typical humanoid design but rejected others. Alfie has wheels instead of legs, for example, as bipedal robots are currently less stable in home environments, but he does have arms and a head. The robot will be built on a vertical column that resembles a torso; his specific height and weight are not yet public. He will have two emergency stop buttons. Nothing about Alfie’s design will attempt to obscure the fact that he is a robot, Lewis says. “The antithesis [of trustworthiness] would be designing a robot that’s intended to emulate a human … and its measure of success is based on how well it has deceived you,” he told me. “Like, ‘Wow, I was talking to that thing for five minutes and I didn’t realize it’s a robot.’ That, to me, is dishonest.”


My Personal Reflection on DevOps in 2024 and Looking Ahead to 2025

As we move into 2025, the big stories that dominated 2024 will continue to evolve. We can expect AI—particularly generative AI—to become even more deeply ingrained in the DevOps toolchain. Prompt engineering for AI models will likely emerge as a specialized skill, just as writing Docker files was a skill set that distinguished DevOps engineers a decade ago. Agentic AI will become the norm with teams of agents taking on the tasks that lower level workers once performed. On the policy side, escalating regulatory demands will push enterprises to adopt more stringent compliance frameworks, integrating AI-driven compliance-as-code tools into their pipelines. Platform engineering will mature, focusing on standardization and the creation of “golden paths” that offer best practices out of the box. We may also see a consolidation of DevOps tool vendors as the market seeks integrated, end-to-end platforms over patchwork solutions. The focus will be on usability, quality, security and efficiency—attributes that can only be realized through cohesive ecosystems rather than fragmented toolchains. Sustainability will also factor into 2025’s narrative. As environmental concerns shape global economic policies and public sentiment, DevOps teams will take resource optimization more seriously. 
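The compliance-as-code idea mentioned above can be sketched as policy rules that run inside the pipeline and fail the build instead of waiting for a manual audit. The policy names and config fields below are invented for illustration; real tools (e.g., policy engines) express the same pattern with dedicated DSLs.

```python
# Compliance-as-code (sketch): policy encoded as executable checks.
# Each rule maps a policy name to a predicate over a deployment config.
POLICIES = [
    ("encryption-at-rest", lambda cfg: cfg.get("storage_encrypted") is True),
    ("no-public-buckets",  lambda cfg: not cfg.get("public_access", False)),
    ("audit-logging",      lambda cfg: cfg.get("audit_log_days", 0) >= 90),
]

def evaluate(cfg: dict) -> list:
    """Return the names of every policy the config violates."""
    return [name for name, rule in POLICIES if not rule(cfg)]

deployment = {"storage_encrypted": True, "public_access": True, "audit_log_days": 30}
violations = evaluate(deployment)
print(violations)  # ['no-public-buckets', 'audit-logging']
# In CI, a non-empty violation list would fail the pipeline stage.
```

Keeping policy in version-controlled code is what makes the regulatory posture auditable and repeatable across every deployment, rather than a periodic checklist exercise.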


From Invisible UX to AI Governance: Kanchan Ray, CTO, Nagarro Shares his Vision for a Connected Future

Vision and data derived from videos have become integral to numerous industries, with machine vision playing a crucial role in automating business processes. For instance, automatic inventory management, often supported by robots, is transitioning from experimental to mainstream. Machine vision also enhances security and safety by replacing human monitoring with machines that operate around the clock, offering greater accuracy at a lower cost. On the consumer front, virtual try-ons and AI-assisted mirrors have become standard features in reputable retail outlets, both in physical stores and online platforms. ... Traditional boundaries of security, which once focused on standard data security, governance, and IT protocols, are now fluid and dynamic. The integration of AI, data analytics, and machine learning has created diverse contexts for output consumption, resulting in new business operations around model simulations and decision-making related to model pipelines. These operations include processes like model publishing, hyperparameter observability, and auditing model reasoning, all of which push the boundaries of AI responsibility.


If your AI-generated code becomes faulty, who faces the most liability exposure?

None of the lawyers, though, discussed who is at fault if the code generated by an AI results in some catastrophic outcome. For example: The company delivering a product shares some responsibility for, say, choosing a library that has known deficiencies. If a product ships using a library that has known exploits and that product causes an incident that results in tangible harm, who owns that failure? The product maker, the library coder, or the company that chose the product? Usually, it's all three. ... Now add AI code into the mix. Clearly, most of the responsibility falls on the shoulders of the coder who chooses to use code generated by an AI. After all, it's common knowledge that the code may not work and needs to be thoroughly tested. In a comprehensive lawsuit, will claimants also go after the companies that produce the AIs and even the organizations from which content was taken to train those AIs (even if done without permission)? As every attorney has told me, there is very little case law thus far. We won't really know the answers until something goes wrong, parties wind up in court, and it's adjudicated thoroughly. We're in uncharted waters here. 


5 Signs You’ve Built a Secretly Bad Architecture (And How to Fix It)

Dependencies are the hidden traps of software architecture. When your system is littered with them — whether they’re external libraries, tightly coupled modules, or interdependent microservices — it creates a tangled web that’s hard to navigate. They make the system difficult to debug locally. Every change risks breaking something else. Deployments take more time, troubleshooting takes longer, and cascading failures are a real threat. The result? Your team spends more time toiling and less time innovating. ... Reducing dependencies doesn’t mean eliminating them entirely or splitting your system into nanoservices. Overcorrecting by creating tiny, hyper-granular services might seem like a solution, but it often leads to even greater complexity. In this scenario, you’ll find yourself managing dozens — or even hundreds — of moving parts, each requiring its own maintenance, monitoring, and communication overhead. Instead, aim for balance. Establish boundaries for your microservices that promote cohesion, avoiding unnecessary fragmentation. Strive for an architecture where services interact efficiently but aren’t overly reliant on each other, which increases the flexibility and resilience of your system.
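One concrete way to keep services interacting "efficiently but not overly reliant on each other" is to depend on an interface at the boundary rather than on a concrete implementation. The sketch below uses Python's structural typing; the gateway and service names are hypothetical.

```python
from typing import Protocol

class PaymentGateway(Protocol):
    """The boundary: callers depend on this interface, not on any provider."""
    def charge(self, amount_cents: int) -> str: ...

class StripeLikeGateway:
    """One concrete provider (name is illustrative, not a real SDK)."""
    def charge(self, amount_cents: int) -> str:
        return f"provider-txn:{amount_cents}"

class OrderService:
    # Depends only on the Protocol, so swapping providers (or injecting a
    # test fake) requires no change here -- coupling stays at the boundary.
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def checkout(self, amount_cents: int) -> str:
        return self.gateway.charge(amount_cents)

print(OrderService(StripeLikeGateway()).checkout(1250))  # provider-txn:1250
```

The same principle applies between microservices: a stable contract (API schema, event format) at each boundary lets teams change internals without triggering the cascading failures the article warns about.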


The 4 key aspects of a successful data strategy

Without a data strategy to structure various efforts, the value added from data in any organization of a certain size or complexity falls far short of the possibilities. In such cases, data is only used locally or aggregated along relatively rigid paths. The result? The company’s agility in terms of necessary changes remains inhibited. In the absence of such a strategy, technical concepts and architectures can hardly increase this value either. A well-thought-out data strategy can be formulated in various ways. It encompasses several different facets, such as availability, searchability, security, protection of personal data, cost control, etc. However, four key aspects that form the basis for a data strategy can be identified from a variety of data-related projects: identity, bitemporality, networking and federalism. ... A data strategy also determines how companies encode the knowledge about their products, services, processes and business models. This makes solutions possible that also allow for automated decision support. To sell glasses online, a lot of specialized optician knowledge must be encoded so that the customer does not make serious mistakes when configuring their glasses. The optimal size of the progressive lenses depends, among other things, on the visual acuity and the lens geometry. 
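Of the four aspects named above, bitemporality is the most mechanical, and a small sketch makes it concrete: every fact carries both a valid time (when it was true in the world) and a transaction time (when the system learned it), so the system can answer "what did we believe then?" The address example and ISO-string comparison below are illustrative simplifications.

```python
from dataclasses import dataclass

@dataclass
class BitemporalFact:
    value: str
    valid_from: str   # when the fact became true in the real world
    valid_to: str
    recorded_at: str  # when the system learned it (transaction time)

def as_of(facts, valid_on: str, known_by: str):
    """The core bitemporal query: what did we believe on `known_by`
    about the state of the world on `valid_on`? (ISO-date strings)"""
    candidates = [f for f in facts
                  if f.recorded_at <= known_by
                  and f.valid_from <= valid_on < f.valid_to]
    return max(candidates, key=lambda f: f.recorded_at).value if candidates else None

address = [
    BitemporalFact("12 Elm St", "2020-01-01", "2024-06-01", "2020-01-05"),
    # Correction recorded later: the move actually happened back in May.
    BitemporalFact("9 Oak Ave", "2024-05-01", "9999-12-31", "2024-07-10"),
]
print(as_of(address, valid_on="2024-05-15", known_by="2024-06-01"))  # 12 Elm St
print(as_of(address, valid_on="2024-05-15", known_by="2024-08-01"))  # 9 Oak Ave
```

Because corrections are appended rather than overwritten, both the original belief and the corrected history remain queryable, which is what makes bitemporal data valuable for audits and for reproducing past decisions.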


Maximizing the impact of cybercrime intelligence on business resilience

An intelligence capability is only as effective as its coverage of the adversary. A robust program ensures historical coverage for context, near-real-time coverage for timely responses to immediate threats, and depth of coverage for sufficient understanding. Cybercrime intelligence coverage encompasses both human and technical data. Valuable sources of information include any platforms where cybercriminals gather to communicate, coordinate, or trade, such as social networks, chatrooms, forums and direct one-on-one interactions. Technical coverage requires visibility into the tools used by adversaries. This coverage can be obtained through programmatic malware emulation across the full spectrum of malware families deployed by cybercriminals, ensuring comprehensive insights into their activities in a timely and ongoing manner. ... Adversary Intelligence is produced from a focused collection, analysis and exploitation capability and curated from where threat actors collaborate, communicate and plan cyber attacks. Obtaining and utilizing this Intelligence provides proactive and groundbreaking insights into the methodology of top-tier cybercriminals – target selection, assets and tools used, associates and other enablers that support them.


Large language overkill: How SLMs can beat their bigger, resource-intensive cousins

LLMs are incredibly powerful, yet they are also known for sometimes “losing the plot,” offering outputs that veer off course due to their generalist training and massive data sets. That tendency is made more problematic by the fact that OpenAI’s ChatGPT and other LLMs are essentially “black boxes” that don’t reveal how they arrive at an answer. This black box problem is going to become a bigger issue going forward, particularly for companies and business-critical applications where accuracy, consistency and compliance are paramount. ... Fortunately, SLMs are better suited to address many of the limitations of LLMs. Rather than being designed for general-purpose tasks, SLMs are developed with a narrower focus and trained on domain-specific data. This specificity allows them to handle nuanced language requirements in areas where precision is paramount. Rather than relying on vast, heterogeneous datasets, SLMs are trained on targeted information, giving them the contextual intelligence to deliver more consistent, predictable and relevant responses. This specialization brings several advantages. First, SLMs are more explainable, making it easier to understand the source and rationale behind their outputs. This is critical in regulated industries where decisions need to be traced back to a source.
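One common way to exploit an SLM's narrower focus is to route queries: send in-domain questions to the small, specialized model and fall back to a general LLM otherwise. The sketch below is a deliberately simple illustration of that idea; the model names and the keyword heuristic are invented for the example (a production router would typically use a trained classifier).

```python
# Hypothetical sketch: routing queries between a domain-specific SLM and a
# general-purpose LLM. Model names and the keyword heuristic are
# illustrative assumptions, not real products or a recommended design.

DOMAIN_TERMS = {"claim", "premium", "deductible", "policyholder"}  # e.g. insurance

def route(query: str) -> str:
    """Pick a model for the query; a real router might use a small classifier."""
    words = set(query.lower().split())
    if words & DOMAIN_TERMS:
        return "insurance-slm-3b"   # small, specialized, easier to trace
    return "general-llm-70b"        # general-purpose fallback

print(route("How is my deductible applied to this claim?"))
```

Because the specialized path handles a bounded vocabulary and task, its answers are easier to audit against the domain data it was trained on, which is the traceability advantage the excerpt describes.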


Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother

Even though AI brings great productivity gains, Shadow AI introduces distinct risks ... Studies show employees are frequently sharing legal documents, HR data, source code, financial statements and other sensitive information with public AI applications. AI tools can inadvertently expose this sensitive data to the public, leading to data breaches, reputational damage and privacy concerns. ... Feeding data into public platforms means that organizations have very little control over how their data is managed, stored or shared, and little knowledge of who has access to it and how it will be used in the future. This can result in non-compliance with industry and privacy regulations, potentially leading to fines and legal complications. ... Third-party AI tools could have built-in vulnerabilities that a threat actor could exploit to gain access to the network. These tools may lack security standards comparable to an organization’s internal security systems. Shadow AI can also introduce new attack vectors, making it easier for malicious actors to exploit weaknesses. ... Without proper governance or oversight, AI models can produce biased, incomplete or flawed outputs, and such inaccurate results can cause real harm to organizations.
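One concrete mitigation for the data-leakage risk above is a lightweight guard that redacts obvious sensitive patterns before any text leaves the organization for a public AI service. The sketch below is a minimal, hypothetical illustration; the regexes cover only a few easy cases, and real data-loss-prevention controls are far more thorough.

```python
# Hypothetical sketch: redacting a few obvious sensitive patterns before
# text is sent to a public AI tool. The patterns are illustrative only;
# real DLP tooling handles many more data types and evasion cases.

import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
```

A guard like this only reduces accidental exposure; it does not replace governance over which AI tools employees are permitted to use in the first place.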



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel