Daily Tech Digest - January 03, 2025

Tech predictions 2025 – what could be in store next year?

In 2025, we will hear of numerous cases where threat actors trick a corporate Gen AI solution into giving up sensitive information, causing high-profile data breaches. Many enterprises are using Gen AI to build customer-facing chatbots to aid everything from bookings to customer service. Indeed, to be useful, LLMs must ultimately be granted access to information and systems so that they can answer questions and take actions that a human would otherwise have been tasked with. As with any new technology, we will witness numerous corporations grant LLMs access to huge amounts of potentially sensitive data without appropriate security considerations. ... The future of work won’t be a binary choice between humans and machines. It will be an “and.” AI-powered humanoids will form part of the future workforce, and we will likely see the first instances next year. This will force companies to completely reimagine their workplace dynamics – and the technology that powers them. ... At the same time, organisations must ensure their security postures keep pace: not only to keep the data being processed by humanoids safe, but also to safeguard the humanoids themselves from hacking and malicious tampering with their software and commands.


7 Private Cloud Trends to Watch in 2025

A lot of organizations are repatriating workloads to private cloud from public cloud, but Rick Clark, global head of cloud advisory at digital transformation solutions company UST, warns that, much as when they first migrated to public cloud, they aren’t giving it much forethought. As a result, they’re not getting the ROI they hope for. “We still haven’t figured out what is appropriate for workloads. I’m seeing companies wanting to move back a percentage of their workloads to reduce cost without really understanding what the value is, so they’re devaluing what they’re doing,” says Clark. ... Artificial intelligence and automation are also set to play a crucial role in private cloud management. They enable businesses to handle growing complexity by automating resource optimization, enhancing threat detection, and managing costs. “The ongoing talent shortage in cybersecurity makes [AI and automation] especially valuable. By reducing manual workloads, AI allows companies to do more with fewer resources,” says Trevor Horwitz, CISO and founder at a cybersecurity, consulting, and compliance services provider. ... Security affects all aspects of a cloud journey, including the calculus of when and where to use private cloud environments. One significant challenge is making sure that all layers of the stack have detection and response capability.


Agility in Action: Elevating Safety through Facial Recognition

Facial Recognition Technology (FRT) stands out as a leading solution to these problems, protecting not only physical boundaries but also the organization’s overall integrity. Through precise identity verification and user validation, FRT considerably lowers the possibility of unauthorized access. Organizations, irrespective of size, can benefit from this technology, which offers improved security and operational effectiveness. ... A comprehensive physical security program with interconnected elements serves as the backbone of any security infrastructure. Regulating who can enter or exit a facility is vital. Effective systems include traditional mechanical methods, such as locks and keys, as well as electronic solutions like RFID cards. These methods ensure that only authorized persons are able to enter. Nonetheless, a technological solution that works with many Original Equipment Manufacturers (OEMs) is required to successfully counter today’s threats. In addition to guaranteeing general user convenience, this technology should give top priority to data privacy and safety compliance.
Effective physical security is built on deterring unauthorized entry and identifying people of interest. This can include anything from physical security personnel to surveillance and access control systems.


Strategies for Managing Data Debt in Growing Organizations

Not all data debt is created equal. Growing organizations experiencing data sprawl at an expanding rate must conduct a thorough impact assessment to determine which aspects of their data debt are most harmful to operational efficiency and strategic initiatives. An effective approach involves quantifying the potential risks associated with each type of debt – such as compliance violations or lost customer insights – and calculating the opportunity cost of maintaining versus mitigating them. ... A core approach to managing data debt is to establish strong data governance practices that address inconsistencies and fragmentation. Before anything else, you must establish an adequate access control system and ensure it cannot be circumvented. Next, you must implement robust validation mechanisms that help prevent further debt accumulation. Data governance frameworks provide a foundation for minimizing ad hoc fixes, which are the primary drivers of data debt. ... An architectural shift that facilitates scalability can help avoid the bottlenecks that arise when data outgrows its infrastructure. Technologies like cloud platforms offer scalability without heavy up-front investments, allowing organizations to expand their capacity in line with their growth.
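
To illustrate the kind of validation mechanism mentioned above, here is a minimal sketch in Python; the field names and rules are hypothetical, not from the article. Records failing governance checks are quarantined at ingestion instead of silently adding to the debt:

```python
from dataclasses import dataclass

# Hypothetical governance rules for an incoming customer record.
REQUIRED_FIELDS = {"customer_id", "email", "created_at"}

@dataclass
class ValidationResult:
    valid: bool
    errors: list

def validate_record(record: dict) -> ValidationResult:
    """Check a record against basic governance rules before ingestion."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing required fields: {sorted(missing)}")
    email = record.get("email", "")
    if email and "@" not in email:
        errors.append(f"malformed email: {email!r}")
    return ValidationResult(valid=not errors, errors=errors)

# A record that fails validation is quarantined rather than silently
# ingested, so inconsistencies surface now instead of becoming debt.
record = {"customer_id": "C-1001", "email": "user-at-example.com"}
result = validate_record(record)
if not result.valid:
    print("quarantined:", result.errors)
```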


Secure by design vs by default – which software development concept is better?

The challenge here is that, while from a security perspective we may agree that it is wise, it could inevitably put developers and vendors at a competitive disadvantage. Those who don’t prioritize secure-by-design can get features, functionality, and products out to market faster, leading to potentially more market share, revenue, customer attraction/retention, and more. Additionally, many vendors are venture-capital backed, which comes with expectations of return on investment — and the reality that cyber is just one of many risks their business is facing. They must maintain market share, hit revenue targets, deliver customer satisfaction, raise brand awareness/exposure, and achieve the most advantageous business outcomes. ... Secure-by-default development focuses on ensuring that software components arrive at the end-user with all security features and functions fully implemented, with the goal of providing maximum security right out of the box. Most cyber professionals have experienced having to apply CIS Benchmarks, DISA STIGs, vendor guidance, and so on to harden a new product or piece of software and reduce its attack surface. Secure-by-default flips that paradigm on its head: products arrive hardened, and customers must roll back or loosen the hardened configurations to tailor them to their needs.
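
To make the contrast concrete, here is a minimal, hedged Python sketch of the secure-by-default idea (the class and option names are hypothetical): the product ships fully hardened, and loosening a setting is an explicit, visible act by the customer rather than a hardening step they must remember to perform:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceConfig:
    """Hypothetical service configuration that ships hardened by default."""
    tls_required: bool = True           # encryption on out of the box
    min_tls_version: str = "1.3"
    admin_api_enabled: bool = False     # risky surfaces off by default
    password_min_length: int = 16
    allowed_origins: list = field(default_factory=list)  # deny-all CORS

    def loosen(self, **overrides):
        """Rolling back hardening is explicit and leaves a visible trail."""
        for key, value in overrides.items():
            print(f"WARNING: loosening hardened default {key} -> {value!r}")
            setattr(self, key, value)

# Secure out of the box; weakening is a deliberate act by the customer.
config = ServiceConfig()
config.loosen(min_tls_version="1.2")  # e.g., to support a legacy client
```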


The modern CISO is a cornerstone of organizational success

Historically, CISOs focused on technical responsibilities, including managing firewalls, monitoring networks, and responding to breaches. Today, they are integral to the C-suite, contributing to decisions that align security initiatives with organizational goals. This shift in responsibilities reflects the growing realization that security is not just an IT function but a critical enabler of business goals, customer trust, and competitive advantage. CISOs are increasingly embedded in the strategic planning process, ensuring that cybersecurity initiatives support overall business goals rather than operate as standalone activities. ... One of the most critical aspects of the modern CISO role is integrating security into operational processes without disrupting productivity. This involves working closely with operations teams to design workflows prioritizing efficiency and security. This aspect of their responsibility ensures that security does not become a bottleneck for business operations but enhances operational resilience, efficiency, and productivity. ... The CISO of tomorrow will redefine success by aligning cybersecurity with business objectives, fostering a culture of shared responsibility, and driving resilience in the face of emerging risks like AI-driven attacks, quantum threats, and global regulatory pressures.


Key Infrastructure Modernization Trends for Enterprises

Cloud providers and data centers need advanced cooling technologies, including rear-door heat exchangers, immersion and direct-to-chip systems. Sustainable power sources such as solar and wind must supplement traditional energy resources. These infrastructure changes will support new chip generations, increased rack densities and expanding AI requirements while enabling edge computing use cases. "Liquid cooling has evolved to move from cooling the broader data center environment to getting closer and even within the infrastructure," Hewitt said. "Liquid-cooled infrastructure remains niche today in terms of use cases but will become more predominant as next generations of GPUs and CPUs increase in power consumption and heat production." ... Document existing business processes and workflows to improve visibility and identify gaps suitable for AI implementation. Organizations must organize their data for the AI tools that will use it, keep track of where that data resides, build internal guidelines for training and testing AI-driven workflows, and create robust controls for processes that incorporate AI agents.


Being Functionless: How to Develop a Serverless Mindset to Write Less Code!

As the adoption of FaaS increased, cloud providers added a variety of language runtimes to cater to different computational needs, skills, etc., offering something for most programmers. Language runtimes such as Java, .NET, Node.js, Python, Ruby, and Go are the most popular and widely adopted. However, this also brings some challenges to organizations adopting serverless technology. More than technology challenges, these are mindset challenges for engineers. ... Sustainability is a crucial aspect of modern cloud operation. Consuming renewable energy, reducing carbon footprint, and achieving green energy targets are top priorities for cloud providers. Cloud providers invest in efficient power and cooling technologies and operate an efficient server population to achieve higher utilization. For this reason, AWS recommends using managed services for efficient cloud operation as part of its Well-Architected Framework best practices for sustainability. ... For engineers new to serverless, adapting their thinking to its demands can be challenging. Hence, you hear about the serverless mindset as a prerequisite to adopting serverless. This is because working with serverless requires a new way of thinking about, developing, and operating applications in the cloud.
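
A common illustration of the "functionless" argument is glue code like the hedged sketch below (the table and payload are hypothetical). The function does nothing but forward a payload to DynamoDB; a direct API Gateway-to-DynamoDB service integration could eliminate it entirely, which is precisely the "write less code" outcome the serverless mindset aims for:

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name

def handler(event, context):
    """Pure glue: parse an API Gateway payload and store it.

    A direct API Gateway -> DynamoDB service integration could replace
    this whole function, removing code that must be written, secured,
    patched, and paid for, which is the point of going 'functionless'.
    """
    item = json.loads(event["body"])
    table.put_item(Item=item)
    return {"statusCode": 201, "body": json.dumps({"stored": True})}
```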


Unlocking opportunities for growth with sovereign cloud

Although there is no standard definition of what constitutes a “sovereign cloud,” there is a general understanding that it must ensure sovereignty at three fundamental levels: data, operations, and infrastructure. Sovereign cloud solutions, therefore, have highly demanding requirements when it comes to digital security and the protection of sensitive data, from technical, operational, and legal perspectives. The sovereign cloud concept also opens up avenues for competition and innovation, particularly among local cloud service providers within the UK. In a recent PwC survey, 78% of UK business leaders said they have adopted cloud in most or all areas of their organisations. However, many of the cloud providers serving them operate outside of the country, usually across the pond. The development of sovereign cloud offerings provides the perfect push for UK cloud service providers to increase their market share, providing local tools to power local innovation. For a large-scale, accessible, and competitive sovereign cloud ecosystem to emerge, a combination of certain factors is essential. Firstly, partnerships are crucial. Developing local sovereign cloud solutions that offer the same benefits and ease of use as large hyperscalers is a significant challenge.


The Tipping Point: India's Data Center Revolution

"Data explosion and data localization are paving the way for a data center revolution in India. The low data tariff plans, access to affordable smartphones, adoption of new technologies and growing user base of social media, e-commerce, gaming and OTT platforms are some of the key triggers for data explosion. Also, AI-led demand, which is expected to increase multi-fold in the next 3-5 years, presents significant opportunities. This, coupled with favourable regulatory policies from the Central and State governments, the draft Digital Personal Data Protection Bill, and the infrastructure status are supporting the growth prospects," said Anupama Reddy, Vice President and Co-Group Head - Corporate Ratings, ICRA. ... The high-octane data center industry comes with its own set of challenges. The data center industry faces high operational costs alongside challenges in scalability, cybersecurity, sustainability, and skilled workforce. Power and cooling are major cost drivers, with data centers consuming 1-1.5 per cent of global electricity. Advanced cooling solutions and energy-efficient hardware can help reduce energy costs while supporting environmental goals.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree

Daily Tech Digest - January 02, 2025

7 Practices to Bolster Cloud Security and Keep Attackers at Bay

AI tools can facilitate quicker threat detection, investigation, and response. All healthy cloud security postures should utilize ML-based user and entity behavior analytics (UEBA) tools. Such tools effectively identify anomalous behavior across the network, while facilitating rapid investigation of potential threats and automating responses to mitigate and remediate attacks. Ideally, security professionals want to find vulnerabilities before an attack occurs, and such AI tools can help to do just that. ... When a threat occurs in the cloud, it can sometimes be difficult to assess the potential impact across a distributed or multitenant surface. By utilizing a centralized platform, security personnel have access to a response center that can automate workflows by orchestrating with different cloud applications, which in turn reduces the mean time to resolve (MTTR) incidents and threats. ... By correlating access and security logs from cloud applications, security personnel can identify attempts at data exfiltration from the cloud. As a quick example, if a SOC professional is investigating potential customer data exfiltration from a cloud-based CRM tool, he or she would want to correlate the logs of that CRM tool with the logs of other cloud applications, such as email or team communication tools. 
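
As a hedged sketch of the log-correlation idea in that last example (the log schemas, field names, and thresholds are all hypothetical), joining CRM export events with outbound email events by user within a time window can surface exfiltration patterns:

```python
from datetime import datetime, timedelta

# Hypothetical, already-parsed log entries from two cloud applications.
crm_logs = [
    {"user": "jdoe", "action": "export_contacts",
     "time": datetime(2025, 1, 2, 9, 14), "rows": 48000},
]
email_logs = [
    {"user": "jdoe", "action": "send_external",
     "time": datetime(2025, 1, 2, 9, 31), "attachment_mb": 92},
]

WINDOW = timedelta(hours=1)

def correlate_exfiltration(crm_logs, email_logs, window=WINDOW):
    """Flag CRM bulk exports followed shortly by large external emails."""
    alerts = []
    for crm in crm_logs:
        if crm["action"] != "export_contacts" or crm["rows"] < 10000:
            continue  # ignore small, routine exports
        for mail in email_logs:
            if (mail["user"] == crm["user"]
                    and mail["action"] == "send_external"
                    and timedelta(0) <= mail["time"] - crm["time"] <= window):
                alerts.append((crm["user"], crm["time"], mail["time"]))
    return alerts

print(correlate_exfiltration(crm_logs, email_logs))
```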


6 AI-Related Security Trends to Watch in 2025

As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps — or the practice of managing and monitoring AI models in production — converge into a broader, all-encompassing xOps management approach, Holt says. The push to AI-enabled software is increasingly blurring the lines between traditional declarative apps that follow predefined rules to achieve specific outcomes, and LLMs and GenAI apps that dynamically generate responses based on patterns learned from training data sets, Holt says. ... The easy availability of a wide and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies at many organizations and spawned a new set of challenges for already overburdened security teams. ... "If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations like the EU AI Act start to take effect," she says. 


Working in Cyber Threat Intelligence (CTI)

“The analysis of an adversary’s intent, opportunity, and capability to do harm is known as cyber threat intelligence.” It’s not just about finding some IOCs and sending them to the SOC. It’s about providing context about adversary activity for other security teams to help prioritize cyber defense efforts. While there are more steps than this, in short we collect intrusion data and analyze it, looking for correlations and trends in observed malicious activity. With that analyzed activity and those trends, we can provide actionable insights into malicious activity to keep defenders focused on only the most relevant threats. ... Aside from everything in the “What CTI Isn’t” section, the biggest challenge in CTI is that it’s next to impossible to get decent intel requirements. “Just get us intel” isn’t a thing. We need information in order to provide relevant intelligence. What strategic initiatives, products, technologies, partnerships, etc. are of particular interest to the leadership? What are all of your countries of operation? What are considered the most critical assets? How would a threat actor achieving their objectives impede the organization’s mission? This unfortunately remains an ongoing problem that many CTI analysts and CTI managers struggle with, and it often leads to intel analysts winging it.


What’s Ahead in Generative AI in 2025?

In the coming year, prompt engineering will continue its rapid maturation into a substantial body of proven practices for eliciting the correct output from LLMs and other foundation models. Within generative AI development tool sets, embedding libraries will become an essential component for developers to build increasingly sophisticated similarity searches that span a diverse range of data modalities. The recent TDWI survey on enterprise AI readiness shows that 28% of organizations already use or are deploying vector databases to store vector embeddings for use with AI models, while 32% plan to adopt those databases in the next few years. In addition, generative AI developers in 2025 will have access to a growing range of tools for no-code development of “agentic” applications that provide autonomous LLM-driven copilot, chatbot, and other functionality and that can be orchestrated over more complex process environments. ... Developers will have access in 2025 to a growing range of sophisticated models and data for building, training, and optimizing generative AI applications—including both commercial and open-source models. The recent TDWI survey on data and analytics trends showed that around 25% of enterprises are experimenting with private or public generative AI models, while 17% are building generative AI apps that use company data with pretrained models. 
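
To make the embedding-based similarity search concrete, here is a minimal sketch in Python. The `embed` function is a deterministic stand-in for a real embedding library or model (384 dimensions is simply a common size); the search itself reduces to a cosine-similarity lookup over unit vectors:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model; returns a deterministic vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)          # 384-dim, a common embedding size
    return v / np.linalg.norm(v)

# Index a small corpus as unit vectors.
docs = ["reset a password", "rotate API keys", "team lunch schedule"]
index = np.stack([embed(d) for d in docs])

def search(query: str, k: int = 2):
    """Return the top-k documents by cosine similarity to the query."""
    q = embed(query)
    scores = index @ q                    # dot product of unit vectors = cosine
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in top]

print(search("how do I change my password?"))
```

A vector database does the same nearest-neighbor lookup at scale, with approximate indexes replacing the brute-force matrix product shown here.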


This Is The Phrase That Instantly Damages Your Leadership Integrity

There are few phrases that can instantly cause hesitation like the phrase “to be honest with you.” Here are a few other honorable mentions that cause the same damage for the same reasons: In all honesty… Frankly… To tell you the truth… Truthfully, or truthfully speaking… When you casually use a statement like “to be honest with you” in an effort to ensure that you’re more likely to be believed, the exact opposite happens. Instead of trusting you more, listeners trust you less. ... Without leadership integrity, you’d have a very heavy lift trying to get people to believe in you, to listen to you, to count on you and to give you the benefit of the doubt that leaders so desperately need during times of uncertainty, ambiguity and crisis. This is why you don’t want to damage your leadership integrity or cause people to question your credibility by throwing out unthoughtful words or phrases that could give them pause. ... Saying something like “mistakes were made” shows a complete lack of leadership integrity and sends the signal that someone somewhere made a mistake but that you take no ownership of it. Instead, accept responsibility and show that you are accountable for the mistake and for the resolution as well.


Generative AI is not going to build your engineering team for you

Generative AI is like a junior engineer in that you can’t simply roll their code into production. You are responsible for it—legally, ethically, and practically. You still have to take the time to understand it, test it, instrument it, retrofit it stylistically and thematically to fit the rest of your code base, and ensure your teammates can understand and maintain it as well. The analogy is a decent one, actually, but only if your code is disposable and self-contained, i.e. not meant to be integrated into a larger body of work, or to survive and be read or modified by others. And hey—there are corners of the industry like this, where most of the code is write-only, throwaway code. ... To state the supremely obvious: giving code review feedback to a junior engineer is not like editing generated code. Your effort is worth more when it is invested into someone else’s apprenticeship. It’s an opportunity to pass on the lessons you’ve learned in your own career. Even just the act of framing your feedback to explain and convey your message forces you to think through the problem in a more rigorous way, and has a way of helping you understand the material more deeply. And adding a junior engineer to your team will immediately change team dynamics. It creates an environment where asking questions is normalized and encouraged, where teaching as well as learning is a constant. 


Architectural Decision-Making: AI Tools as Consensus Builders

In an environment with lots of smart, quick-thinking people it can be a challenge to ensure everyone is heard, especially when the primary mode of interaction is videoconferencing. The online format (a Microsoft Teams group chat) gave people time to contribute their thoughts over a period of days rather than minutes. At various points in the online conversation, participants extracted content from the online discussion board and fed it to a large language model to compare ideas that were present in the dialogue, or to recast the dialogue in a particular person’s voice. ... The benefits of using AI tools are not cost free. It’s important to verify the results of an AI’s synthesis of text because sometimes the AI misinterprets what was written. For example, during our discussion of capabilities and domains, an AI tool interpreted some of my text as stating that the boundaries of a domain are context dependent when in fact, I was making the opposite argument – that a domain must have a consistent definition that is valid across any contexts in which it participates. Another consideration is the ethics of intellectual property ownership and citation of participants’ contributions. 


Perhaps the biggest challenge of IaC operations is drift — a scenario where runtime environments deviate from their IaC-defined states, creating a festering issue that could have serious long-term implications. These discrepancies undermine the consistency of cloud environments, leading to potential issues with infrastructure reliability and maintainability and even significant security and compliance risks. ... But having additional context for drift, as important as it may be, is only one piece of a much bigger puzzle. Managing large cloud fleets with codified resources introduces more than just drift challenges, especially at scale. Current-gen IaC management tools are effective at addressing resource management, but the demand for greater visibility and control in enterprise-scale environments is introducing new requirements and driving their inevitable evolution. ... The combination of IaC management and CAM empowers teams to manage complexity with clarity and control. As the end of the year approaches, it's 'prediction season' — so here’s mine. Having spent the better part of the last decade building and refining one of the more popular IaC management platforms, I see this as the natural progression of our industry: combining IaC management, automation, and governance with enhanced visibility into non-codified assets.
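
As a hedged illustration of detecting the drift described above (the directory layout is hypothetical; `terraform plan -detailed-exitcode` is Terraform's documented mechanism, exiting with code 2 when live state differs from the code), a scheduled check might look like this sketch:

```python
import subprocess

def check_drift(workdir: str) -> bool:
    """Return True if live infrastructure has drifted from its IaC definition.

    `terraform plan -detailed-exitcode` exits 0 when no changes are needed,
    2 when the runtime environment deviates from the code, and 1 on error.
    """
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed: {result.stderr}")
    return result.returncode == 2

# A scheduled job could alert on drift as it appears, instead of letting
# discrepancies fester until the next apply surfaces them.
if check_drift("./prod/network"):
    print("drift detected: runtime state deviates from IaC definition")
```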


4 keys for writing cross-platform apps

One big problem with cross-platform compiling is how asymmetrical it can be. If you’re a macOS user, it’s easy to set up and maintain Windows or Linux virtual machines on the Mac. If you use Linux or Windows, it’s harder to emulate macOS on those platforms. Not impossible, just more difficult—the biggest reason being the legal issues, as macOS’s EULA does not allow it to be used on non-Apple hardware. The easiest workaround is to simply buy a separate Macintosh system and use that. Another option is to use tools like osxcross to perform cross-compilation on a Linux, FreeBSD, or OpenBSD system. Another common option, one most in line with modern software delivery methods, is to use a system like GitHub Actions. The downside is paying for the use of the service, but if you’re already invested in such a platform, it’s often the most economical and least messy approach. Plus, it keeps the burden of system maintenance out of your hands. ... The way we write and deploy apps is always in flux. Who would have anticipated the container revolution, for instance? Or predicted the dominant language for machine learning and AI would be Python? To that end, it’s always worth keeping an eye on the future, since cross-platform deployment is fast becoming a must-have feature.


The Connected Revolution: How Integrated Intelligence is Reshaping Drug Development

CI and end-to-end quality are dismantling traditional silos and fostering a seamless, data-driven ecosystem. The use of CI, potentially with data lakes as a way of consolidating vast amounts of data from disparate sources, removes the barriers between independent systems owned by separate departments. The movement of data, for example clinical data that is needed in regulatory submissions, or safety data that is needed alongside regulatory data for regulatory reports, brings a level of fluidity to data management and helps companies optimize time and resources to generate product quality and safety insights. ... For clinical trials, CI and end-to-end quality can significantly enhance patient recruitment and retention. Advanced analytics can identify suitable candidates more efficiently, while real-time monitoring through connected devices can provide continuous data on patient responses and the identification of potential adverse events. This improves the quality of data collected, enhances patient safety and reduces trial time and cost. ... CI and AI-driven regulatory intelligence, in the context of quality-controlled procedures, can support the gathering of global submission requirements and the creation of global submission content, which will then be subject to human review as part of QC.



Quote for the day:

"A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves." -- Laotzu

Daily Tech Digest - January 01, 2025

The Architect’s Guide to Open Table Formats and Object Storage

Data lakehouse architectures are purposefully designed to leverage the scalability and cost-effectiveness of object storage systems, such as Amazon Web Services (AWS) S3, Google Cloud Storage and Azure Blob Storage. This integration enables the seamless management of diverse data types — structured, semi-structured and unstructured — within a unified platform. ... The open table formats also incorporate features designed to boost performance. These also need to be configured properly and leveraged for a fully optimized stack. One such feature is efficient metadata handling, where metadata is managed separately from the data, which enables faster query planning and execution. Data partitioning organizes data into subsets, improving query performance by reducing the amount of data scanned during operations. Support for schema evolution allows table formats to adapt to changes in data structure without extensive data rewrites, ensuring flexibility while minimizing processing overhead.
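
As a hedged sketch of the data-partitioning feature described above (using PySpark's public DataFrame API; the bucket paths and column names are hypothetical, and plain Parquet partitioning stands in for what an open table format manages automatically), partition pruning lets the engine scan only the matching subsets:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()

# Hypothetical raw event data landing in object storage.
events = spark.read.json("s3://example-bucket/raw/events/")

# Partition on a low-cardinality column so queries that filter on it
# scan only the matching subdirectories instead of the whole table.
(events.write
    .partitionBy("event_date")
    .mode("overwrite")
    .parquet("s3://example-bucket/tables/events/"))

# A filtered read now touches only the relevant partitions.
jan_events = (spark.read.parquet("s3://example-bucket/tables/events/")
    .filter("event_date = '2025-01-01'"))
print(jan_events.count())
```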


The future of open source will be messy

First, it’s important to point out that open source software is both pervasive and foundational. Where would we be without Linux and the vast treasure trove of other open source projects on which the internet is built? However, the vast majority of software, written for use or sale, is not open source. This has always been true. Developers do care about open source, and for good reason, but it is not their top concern. As Redis CEO Rowan Trollope told me in a recent interview, “If you’re the average developer, what you really care about is capability: Does this [software] offer something unique and differentiated that’s awesome that I need in my application.” ... Meanwhile, Meta and the rest of the industry keep releasing new code, calling it open source or open weights (Sam Johnston offers a great analysis), without much concern for what the OSI or anyone else thinks. Johnston may be exaggerating when he says, “The more [the word] open appears in an artificial intelligence product’s branding, the less open it actually tends to be,” but it’s clear that the term open gets used a lot, starting with category leader OpenAI, which is not open in any discernible sense, without much concern for any traditional definitions. 


What’s next for generative AI in 2025?

“Data is the lifeblood of any AI initiative, and the success of these projects hinges on the quality of the data that feeds the models,” said Andrew Joiner, CEO of Hyperscience, which develops AI-based office work automation tools. “Alarmingly, three out of five decision makers report their lack of understanding of their own data inhibits their ability to utilize genAI to its maximum potential. The true potential…lies in adopting tailored SLMs, which can transform document processing and enhance operational efficiency.” Gartner recommends that organizations customize SLMs to specific needs for better accuracy, robustness, and efficiency. “Task specialization improves alignment, while embedding static organizational knowledge reduces costs. Dynamic information can still be provided as needed, making this hybrid approach both effective and efficient,” the research firm said. ... While Agentic AI architectures are a top emerging technology, they’re still two years away from reaching the lofty automation expected of them, according to Forrester. While companies are eager to push genAI into complex tasks through AI agents, the technology remains challenging to develop because it mostly relies on synergies between multiple models, customization through retrieval augmented generation (RAG), and specialized expertise. 
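
As a hedged sketch of the RAG pattern mentioned above (the keyword-overlap retrieval is a deliberately naive placeholder for vector search, and no specific vendor API is implied), the core idea is simply to ground the prompt in retrieved company data before calling a pretrained model:

```python
def retrieve(query: str, store: dict, k: int = 2) -> list:
    """Placeholder retrieval: rank stored snippets by naive keyword overlap.

    A production system would use vector similarity over embeddings instead.
    """
    terms = set(query.lower().split())
    ranked = sorted(store.items(),
                    key=lambda kv: len(terms & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(query: str, snippets: list) -> str:
    """Ground the model in retrieved company data rather than retraining it."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

store = {
    "policy": "Refunds are processed within 14 days of a return request.",
    "hours": "Support is available weekdays from 9am to 6pm CET.",
}
prompt = build_prompt("How long do refunds take?",
                      retrieve("refund days", store))
print(prompt)  # this prompt would then be sent to a pretrained model
```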


The Perils of Security Debt: Serious Pitfalls to Avoid

Security debt is caused by a failure to “build security in” to software from design through deployment as part of the SDLC. Security debt accumulates when a development organization releases software with known issues, deferring remediation of its weaknesses and vulnerabilities. Sometimes the organization skips certain test cases or scenarios in pursuit of faster deployment, failing to test the software thoroughly in the process. Sometimes the business decides that the pressure to finish a project is so great that it makes more sense to release now and fix issues later. Later is better than never, but when “later” never arrives, existing security debt grows worse. ... Great leadership is the beacon that not only charts the course but also ensures your crew – your IT team, support staff, and engineers – is well-prepared to face the challenges ahead. It instills discipline, vigilance, and a culture of security that can withstand the fiercest digital storms. The board and leadership must understand and champion the importance of security for the organization. By setting the tone at the top, they can drive the cultural and procedural changes needed to prevent the accumulation of security debt. Periodic review and monitoring of security metrics, and identifying and tracking security debt as a risk, can help keep the organization accountable and on track.


The long-term impacts of AI on networking

Every enterprise that self-hosted AI told me the mission demanded more bandwidth to support “horizontal” traffic than their normal applications, more than their current data center network was built to support. Ten of the group said this meant they’d need the “cluster” of AI servers to have faster Ethernet connections and higher-capacity switches. Everyone agreed that a real production deployment of on-premises AI would need new network devices, and fifteen said they bought new switches even for their large-scale trials. The biggest problem with the data center network I heard from those with experience is that they believed they built up more of an AI cluster than they needed. Running a popular LLM, they said, requires hundreds of GPUs and servers, but small language models can run on a single system, and a third of current self-hosting enterprises said they believed it best to start small, with small models, and build up only once they had experience and could demonstrate a need. This same group also pointed out that control was needed to ensure only truly useful AI applications were run. “Applications otherwise build up, exceed, and then increase, the size of the AI cluster,” said users. 


Bridging Skill Gaps in the Automotive Industry with AI-Led Immersive Simulations

This crisis of personnel shortfall is particularly acute in sectors like autonomous driving and AI-driven manufacturing, where the required skillset surpasses the capabilities of the current workforce. This alarming shortage of specialised expertise poses a serious threat to the industry’s progress. It could potentially lead to production halts at various facilities, delay the launch of next-generation vehicles, and hinder the transition to self-driving cars powered by sustainable energy. In order to address this issue, orthodox educational methods must be modernised to incorporate cutting-edge technologies like AI and robotics. ... Unlike traditional training, which often involves static lessons or expensive hands-on practice, immersive simulations allow workers to practice in environments that would be too risky or costly in real life. For example, with autonomous vehicles, workers can practice fixing and calibrating vehicle systems in a virtual world without the risk of damaging anything. These simulations can also create different road conditions for workers to experience, helping them build critical decision-making skills without real-world consequences. 


AI agents might be the new workforce, but they still need a manager

AI agents need to be thoughtfully managed, just as is the case with human work, and there's work to be done before an agentic AI-driven workforce can truly assume a broad range of tasks. "While the promise of agentic AI is evident, we are several years away from widespread agentic AI adoption at the enterprise level," said Scott Beechuk, partner with Norwest Venture Partners. "Agents must be trustworthy given their potential role in automating mission-critical business processes." The traceability of AI agents' actions is one issue. "Many tools have a hard time explaining how they arrived at their responses from users' sensitive data, and models struggle to generalize beyond what they have learned," said Ananthakrishnan. ... Unpredictability is a related challenge, as LLMs "operate like black boxes," said Beechuk. "It's hard for users and engineers to know if the AI has successfully completed its task and if it did so correctly." ... Human workers also are capable of collaborating easily and on a regular basis. For AI workers, it's a different story. "Because agents will interact with multiple systems and data stores, achieving comprehensive visibility is no easy task," said Ananthakrishnan. It's important to have visibility to capture each action an agent takes.


Change management: Achieve your goals with the right change model

You need a good leadership team of influential people who are all pulling in the same direction. This is the only way to implement upcoming changes and anchor them in the company. It is important to include people in the leadership team who have a great deal of influence and/or are well respected by the workforce. At the same time, these people must be fully committed to the planned change. ... Communication comes before implementation. Those affected must understand the change in order to become participants or supporters. Initiating measures without first explaining the context to those involved would unnecessarily create unrest in the company. When communicating, it makes sense to proceed in several steps: the change team first informs the clients and gets a “go” from them. After that, the change team informs the managers so that they can answer questions from employees during company-wide communication. ... Quick wins must be realized and made visible to increase motivation. They should therefore be identified when defining objectives, because early success is important to ensure that the initial motivation does not fizzle out. Initial successes should be related to the overarching goal, because then they strengthen intrinsic motivation. Small successes can thus have a big impact.


Forrester on cybersecurity budgeting: 2025 will be the year of CISO fiscal accountability

Forrester sees the increasing adoption of AI and generative AI (gen AI) as driving the needed updates to infrastructure. “Any Gen AI project that we discussed with customers ultimately becomes a data integration project,” says Pascal Matska, vice president and research director at Forrester. “You have to invest into specific capabilities and platforms that run specific AI workloads in the most suitable infrastructure at the right price point, and also drive investments into cloud-native technologies such as Kubernetes and containers and modern data platforms that really are there to help you drive out some of the frictions that exist within the different business silos,” Matska continued. ... CISOs who drive gains in revenue advance their careers. “When something touches as much revenue as cybersecurity does, it is a core competency. And you can’t argue that it isn’t,” Jeff Pollard, VP and principal analyst at Forrester, said during his keynote titled “Cybersecurity Drives Revenue: How to Win Every Budget Battle” at the company’s Security and Risk Forum in 2022. Budgeting to protect revenue needs to start with the weakest, most at-risk areas. These include software supply chain security, API security, human risk management, and IoT/OT threat detection. 


Passkey technology is elegant, but it’s most definitely not usable security

"The problem with passkeys is that they're essentially a halfway house to a password manager, but tied to a specific platform in ways that aren't obvious to a user at all, and liable to easily leave them unable to access ... their accounts," wrote the Danish software engineer and programmer, who created Ruby on Rails and is the CTO of web-based software development firm 37signals. "Much the same way that two-factor authentication can do, but worse, since you're not even aware of it." ... The security benefits of passkeys at the moment are also undermined by an undeniable truth. Of the hundreds of sites supporting passkeys, there isn't one I know of that allows users to ditch their password completely. The password is still mandatory. And with the exception of Google's Advanced Protection Program, I know of no sites that won't allow logins to fall back on passwords, often without any additional factor. ... Under the FIDO2 spec, the passkey can never leave the security key, except as an encrypted blob of bits when the passkey is being synced from one device to another. The secret key can be unlocked only when the user authenticates to the physical key using a PIN, password, or most commonly a fingerprint or face scan. In the event the user authenticates with a biometric, it never leaves the security key, just as they never leave Android and iOS phones and computers running macOS or Windows.



Quote for the day:

"You are a true success when you help others be successful." -- Jon Gordon