Daily Tech Digest - January 03, 2025

Tech predictions 2025 – what could be in store next year?

In 2025, we will hear of numerous cases where threat actors trick a corporate Gen AI solution into giving up sensitive information, causing high-profile data breaches. Many enterprises are using Gen AI to build customer-facing chatbots to aid everything from bookings to customer service. Indeed, to be useful, LLMs must ultimately be granted access to information and systems in order to answer questions and take actions that a human would otherwise have been tasked with. As with any new technology, we will witness numerous corporations grant LLMs access to huge amounts of potentially sensitive data without appropriate security considerations. ... The future of work won’t be a binary choice between humans or machines. It will be an “and.” AI-powered humanoids will form a part of the future workforce, and we will likely see the first instance happen next year. This will force companies to completely reimagine their workplace dynamics – and the technology that powers them. ... At the same time, organisations must ensure their security postures keep pace: not only to ensure the data being processed by humanoids is kept safe, but also to keep the humanoids safeguarded from hacking and threatening tweaks to their software and commands.


7 Private Cloud Trends to Watch in 2025

A lot of organizations are repatriating workloads from public cloud to private cloud, but Rick Clark, global head of cloud advisory at digital transformation solutions company UST, warns they aren’t giving it the forethought they gave their earlier migrations to public cloud. As a result, they’re not getting the ROI they hope for. “We haven’t still figured out what is appropriate for workloads. I’m seeing companies wanting to move back the percentage of their workload to reduce cost without really understanding what the value is so they’re devaluing what they're doing,” says Clark. ... Artificial intelligence and automation are also set to play a crucial role in private cloud management. They enable businesses to handle growing complexity by automating resource optimization, enhancing threat detection, and managing costs. “The ongoing talent shortage in cybersecurity makes [AI and automation] especially valuable. By reducing manual workloads, AI allows companies to do more with fewer resources,” says Trevor Horwitz, CISO and founder at a cybersecurity, consulting, and compliance services provider. ... Security affects all aspects of a cloud journey, including the calculus of when and where to use private cloud environments. One significant challenge is making sure that all layers of the stack have detection and response capability.


Agility in Action: Elevating Safety through Facial Recognition

Facial Recognition Technology (FRT) stands out as a leading solution to these problems, protecting not only the physical boundaries but also the organization’s overall integrity. Through precise identity verification and user validation, FRT considerably lowers the possibility of unauthorized access. Organizations, irrespective of size, can benefit from this technology, which offers improved security and operational effectiveness. ... A comprehensive physical security program with interconnected elements serves as the backbone of any security infrastructure. Regulating who can enter or exit a facility is vital. Effective systems include traditional mechanical methods, such as locks and keys, as well as electronic solutions like RFID cards. These methods ensure that only authorized persons are able to enter. Nonetheless, a technological solution that works with many Original Equipment Manufacturers (OEMs) is required to successfully counter today’s dangers. In addition to guaranteeing general user convenience, this technology should give top priority to data privacy and safety compliance.
Effective physical security is built on deterring unauthorized entry and identifying people of interest. This can include anything from physical security personnel to surveillance and access control systems.


Strategies for Managing Data Debt in Growing Organizations

Not all data debt is created equal. Growing organizations experiencing data sprawl at an expanding rate must conduct a thorough impact assessment to determine which aspects of their data debt are most harmful to operational efficiency and strategic initiatives. An effective approach involves quantifying the potential risks associated with each type of debt – such as compliance violations or lost customer insights – and calculating the opportunity cost of maintaining versus mitigating them. ... A core approach to managing data debt is to establish strong data governance practices that address inconsistencies and fragmentation. Before anything else, you must establish an adequate access control system and ensure its imperviousness. Next, you must think about implementing robust validation mechanisms that will help prevent further debt accumulation. Data governance frameworks provide a foundation for minimizing ad hoc fixes, which are the primary drivers of data debt. ... An architectural shift that facilitates scalability can help avoid the bottlenecks that arise when data outgrows its infrastructure. Technologies like cloud platforms offer scalability without heavy up-front investments, allowing organizations to expand their capacity in line with their growth.
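The impact assessment described above can be made concrete with a simple scoring exercise. The sketch below is a hypothetical illustration (the debt items, probabilities, and dollar figures are invented for the example): each item's expected annual risk cost is weighed against its one-off mitigation cost, and the items are ranked by the net benefit of mitigating.

```python
# Hypothetical scoring of data-debt items: expected annual risk cost
# versus one-off mitigation cost, to rank what to fix first.
debt_items = [
    # (name, probability of incident per year, impact if it occurs, mitigation cost)
    ("duplicate customer records", 0.60, 50_000, 20_000),
    ("unvalidated ingest pipeline", 0.30, 200_000, 45_000),
    ("stale compliance fields", 0.10, 500_000, 30_000),
]

def opportunity_cost(prob, impact, mitigation):
    """Expected cost of leaving the debt in place minus the cost of fixing it."""
    return prob * impact - mitigation

# Highest net benefit of mitigation first.
ranked = sorted(debt_items, key=lambda d: opportunity_cost(*d[1:]), reverse=True)
for name, p, i, m in ranked:
    print(f"{name}: net benefit of mitigating = {opportunity_cost(p, i, m):,.0f}")
```

Note how the low-probability, high-impact compliance item ranks first under this model, which is exactly the kind of counterintuitive result a quantified assessment surfaces.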


Secure by design vs by default – which software development concept is better?

The challenge here is that, while from a security perspective we may agree that it is wise, it could inevitably put developers and vendors at a competitive disadvantage. Those who don’t prioritize secure-by-design can get features, functionality, and products out to market faster, leading to potentially more market share, revenue, customer attraction/retention, and more. Additionally, many vendors are venture-capital backed, which comes with expectations of return on investment — and the reality that cyber is just one of many risks their business is facing. They must maintain market share, hit revenue targets, deliver customer satisfaction, raise brand awareness/exposure, and achieve the most advantageous business outcomes. ... Secure-by-default development focuses on ensuring that software components arrive at the end-user with all security features and functions fully implemented, with the goal of providing maximum security right out of the box. Most cyber professionals have experienced having to apply CIS Benchmarks, DISA STIGs, vendor guidance and so on to harden a new product or software to ensure we reduce its attack surface. Secure-by-default flips that paradigm on its head so that products arrive hardened and require customers to roll back or loosen the hardened configurations to tailor them to their needs.
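The "flipped paradigm" of secure-by-default can be sketched in code. The configuration object below is entirely hypothetical, but it illustrates the idea: every security-relevant setting defaults to its hardened value, so tailoring the product means explicitly rolling a control back rather than hardening it after the fact.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServerConfig:
    """Hypothetical product configuration with secure-by-default values.

    Every security-relevant field defaults to its hardened setting; the
    customer must explicitly loosen a control to tailor the product.
    """
    tls_min_version: str = "1.3"       # no legacy TLS unless opted into
    admin_api_enabled: bool = False    # management surface off by default
    anonymous_access: bool = False     # authentication required
    audit_logging: bool = True         # logging on out of the box

# Out of the box: already hardened, no CIS Benchmark pass required.
default = ServerConfig()

# Tailoring is a visible, deliberate act of loosening a control.
loosened = ServerConfig(tls_min_version="1.2")
```

The deliberate-loosening step also leaves an audit trail: any deviation from the hardened baseline appears explicitly in the customer's own configuration rather than being an invisible factory default.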


The modern CISO is a cornerstone of organizational success

Historically, CISOs focused on technical responsibilities, including managing firewalls, monitoring networks, and responding to breaches. Today, they are integral to the C-suite, contributing to decisions that align security initiatives with organizational goals. This shift in responsibilities reflects the growing realization that security is not just an IT function but a critical enabler of business goals, customer trust, and competitive advantage. CISOs are increasingly embedded in the strategic planning process, ensuring that cybersecurity initiatives support overall business goals rather than operate as standalone activities. ... One of the most critical aspects of the modern CISO role is integrating security into operational processes without disrupting productivity. This involves working closely with operations teams to design workflows prioritizing efficiency and security. This aspect of their responsibility ensures that security does not become a bottleneck for business operations but enhances operational resilience, efficiency, and productivity. ... The CISO of tomorrow will redefine success by aligning cybersecurity with business objectives, fostering a culture of shared responsibility, and driving resilience in the face of emerging risks like AI-driven attacks, quantum threats, and global regulatory pressures.


Key Infrastructure Modernization Trends for Enterprises

Cloud providers and data centers need advanced cooling technologies, including rear-door heat exchangers, immersion and direct-to-chip systems. Sustainable power sources such as solar and wind must supplement traditional energy resources. These infrastructure changes will support new chip generations, increased rack densities and expanding AI requirements while enabling edge computing use cases. "Liquid cooling has evolved to move from cooling the broader data center environment to getting closer and even within the infrastructure," Hewitt said. "Liquid-cooled infrastructure remains niche today in terms of use cases but will become more predominant as next generations of GPUs and CPUs increase in power consumption and heat production." ... Document existing business processes and workflows to improve visibility and identify gaps suitable for AI implementation. Organizations must organize their data for the AI tools that can bring improvements, keep track of where that data resides, build internal guidelines for training and testing AI-driven workflows, and create robust controls for processes that incorporate AI agents.


Being Functionless: How to Develop a Serverless Mindset to Write Less Code!

As the adoption of FaaS increased, cloud providers added a variety of language runtimes to cater to different computational needs, skills, etc., offering something for most programmers. Language runtimes such as Java, .NET, Node.js, Python, Ruby, Go, etc., are the most popular and widely adopted. However, this also brings some challenges to organizations adopting serverless technology. More than technology challenges, these are mindset challenges for engineers. ... Sustainability is a crucial aspect of modern cloud operation. Consuming renewable energy, reducing carbon footprint, and achieving green energy targets are top priorities for cloud providers. Cloud providers invest in efficient power and cooling technologies and operate an efficient server population to achieve higher utilization. For this reason, AWS recommends using managed services for efficient cloud operation, as part of their Well-Architected Framework best practices for sustainability. ... For engineers new to serverless, adapting their thinking to its demands can be challenging. Hence, you hear about the serverless mindset as a prerequisite to adopting serverless. This is because working with serverless requires a new way of thinking, developing, and operating applications in the cloud.


Unlocking opportunities for growth with sovereign cloud

Although there is no standard definition of what constitutes a “sovereign cloud,” there is a general understanding that it must ensure sovereignty at three fundamental levels: data, operations, and infrastructure. Sovereign cloud solutions, therefore, have highly demanding requirements when it comes to digital security and the protection of sensitive data, from technical, operational, and legal perspectives. The sovereign cloud concept also opens up avenues for competition and innovation, particularly among local cloud service providers within the UK. In a recent PwC survey, 78% of UK business leaders said they have adopted cloud in most or all areas of their organisations. However, many of these cloud providers operate and function outside of the country, usually across the pond. The development of sovereign cloud offerings provides the perfect push for UK cloud service providers to increase their market share, providing local tools to power local innovation. For a large-scale, accessible, and competitive sovereign cloud ecosystem to emerge, a combination of certain factors is essential. Firstly, partnerships are crucial. Developing local sovereign cloud solutions that offer the same benefits and ease of use as large hyperscalers is a significant challenge.


The Tipping Point: India's Data Center Revolution

"Data explosion and data localization are paving the way for a data center revolution in India. The low data tariff plans, access to affordable smartphones, adoption of new technologies and growing user base of social media, e-commerce, gaming and OTT platforms are some of the key triggers for data explosion. Also, AI-led demand, which is expected to increase multi-fold in the next 3-5 years, presents significant opportunities. This, coupled with favourable regulatory policies from the Central and State governments, the draft Digital Personal Data Protection Bill, and the infrastructure status are supporting the growth prospects," said Anupama Reddy, Vice President and Co-Group Head - Corporate Ratings, ICRA. ... The high-octane data center industry comes with its own set of challenges: high operational costs, alongside hurdles in scalability, cybersecurity, sustainability, and skilled workforce availability. Power and cooling are major cost drivers, with data centers consuming 1-1.5 per cent of global electricity. Advanced cooling solutions and energy-efficient hardware can help reduce energy costs while supporting environmental goals.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree

Daily Tech Digest - January 02, 2025

7 Practices to Bolster Cloud Security and Keep Attackers at Bay

AI tools can facilitate quicker threat detection, investigation, and response. All healthy cloud security postures should utilize ML-based user and entity behavior analytics (UEBA) tools. Such tools effectively identify anomalous behavior across the network, while facilitating rapid investigation of potential threats and automating responses to mitigate and remediate attacks. Ideally, security professionals want to find vulnerabilities before an attack occurs, and such AI tools can help to do just that. ... When a threat occurs in the cloud, it can sometimes be difficult to assess the potential impact across a distributed or multitenant surface. By utilizing a centralized platform, security personnel have access to a response center that can automate workflows by orchestrating with different cloud applications, which in turn reduces the mean time to resolve (MTTR) incidents and threats. ... By correlating access and security logs from cloud applications, security personnel can identify attempts at data exfiltration from the cloud. As a quick example, if a SOC professional is investigating potential customer data exfiltration from a cloud-based CRM tool, he or she would want to correlate the logs of that CRM tool with the logs of other cloud applications, such as email or team communication tools. 
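The CRM-exfiltration example above amounts to a join across two log sources on user and time window. The sketch below is a deliberately simplified, hypothetical illustration (the log shapes, user names, and action labels are invented): it flags any user whose bulk CRM export is followed shortly by an outbound email with an attachment.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified logs: (timestamp, user, action)
crm_log = [
    (datetime(2025, 1, 2, 9, 14), "jdoe", "export_contacts"),
    (datetime(2025, 1, 2, 11, 2), "asmith", "view_record"),
]
email_log = [
    (datetime(2025, 1, 2, 9, 21), "jdoe", "send_attachment_external"),
]

def correlate(crm, email, window=timedelta(minutes=30)):
    """Flag users whose bulk CRM export is followed shortly by an external send."""
    hits = []
    for t1, user, action in crm:
        if action != "export_contacts":
            continue
        for t2, u2, a2 in email:
            if u2 == user and timedelta(0) <= t2 - t1 <= window:
                hits.append((user, t1, t2))
    return hits

# jdoe exports contacts at 9:14 and sends an external attachment at 9:21.
suspicious = correlate(crm_log, email_log)
```

In practice a SIEM performs this correlation at scale, but the logic is the same: pivot on a shared identity field and bound the events by a time window.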


6 AI-Related Security Trends to Watch in 2025

As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps — or the practice of managing and monitoring AI models in production — converge into a broader, all-encompassing xOps management approach, Holt says. The push to AI-enabled software is increasingly blurring the lines between traditional declarative apps that follow predefined rules to achieve specific outcomes, and LLMs and GenAI apps that dynamically generate responses based on patterns learned from training data sets, Holt says. ... The easy availability of a wide and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies at many organizations and spawned a new set of challenges for already overburdened security teams. ... "If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations like the EU AI Act start to take effect," she says.


Working in Cyber Threat Intelligence (CTI)

“The analysis of an adversary’s intent, opportunity, and capability to do harm is known as cyber threat intelligence.” It’s not just about finding some IOCs and sending them to the SOC. It’s about providing context about adversary activity for other security teams to help prioritize cyber defense efforts. While there are more steps than this, in short we collect intrusion data and analyze it, looking for correlations and trends to observed malicious activity. With that analyzed activity and trends, we can provide actionable insights into malicious activity to keep defenders focused only on the most relevant. ... Aside from everything in the “What CTI Isn’t” section, the biggest challenge in CTI is that it’s next to impossible to get decent intel requirements. “Just get us intel” isn’t a thing. We need information to give relevant information. What strategic initiatives, products, technologies, partnerships, etc. are of particular interest to the leadership? What are all of your countries of operation? What are considered the most critical assets? How would a threat actor achieving their objectives impede the organization’s mission? It unfortunately is an ongoing problem that many CTI analysts and CTI management struggle with. This often leads to intel analysts winging it.


What’s Ahead in Generative AI in 2025?

In the coming year, prompt engineering will continue its rapid maturation into a substantial body of proven practices for eliciting the correct output from LLMs and other foundation models. Within generative AI development tool sets, embedding libraries will become an essential component for developers to build increasingly sophisticated similarity searches that span a diverse range of data modalities. The recent TDWI survey on enterprise AI readiness shows that 28% of organizations already use or are deploying vector databases to store vector embeddings for use with AI models, while 32% plan to adopt those databases in the next few years. In addition, generative AI developers in 2025 will have access to a growing range of tools for no-code development of “agentic” applications that provide autonomous LLM-driven copilot, chatbot, and other functionality and that can be orchestrated over more complex process environments. ... Developers will have access in 2025 to a growing range of sophisticated models and data for building, training, and optimizing generative AI applications—including both commercial and open-source models. The recent TDWI survey on data and analytics trends showed that around 25% of enterprises are experimenting with private or public generative AI models, while 17% are building generative AI apps that use company data with pretrained models. 
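The similarity searches that embedding libraries enable boil down to nearest-neighbour lookup by cosine similarity over stored vectors. The toy store below is hypothetical (real embeddings come from a model and have hundreds of dimensions), but the ranking logic is the same one a vector database applies:

```python
import math

# Toy embedding store: text -> vector (in practice produced by an embedding model)
store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "gift cards": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: dot product normalised by the vector magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, k=2):
    """Rank stored entries by similarity to the query vector; keep the best k."""
    return sorted(store, key=lambda t: cosine(store[t], query_vec), reverse=True)[:k]

# A query vector close to the "refund policy" embedding retrieves it first.
nearest = top_k([0.85, 0.15, 0.05])
```

Vector databases add indexing (approximate nearest neighbour) so this ranking stays fast at millions of vectors, but conceptually the query is exactly this sort-by-similarity.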


This Is The Phrase That Instantly Damages Your Leadership Integrity

There are few phrases that have the ability to instantly cause hesitation like the phrase “to be honest with you.” Here are a few other honorable mentions that cause the same damage for the same reasons. In all honesty… Frankly… To tell you the truth… Truthfully or truthfully speaking… When you casually use a statement like “to be honest with you,” in an effort to ensure that you’re more likely to be believed, the exact opposite happens. Instead of trusting you more, listeners trust you less. ... Without leadership integrity, you’d have a very heavy lift trying to get people to believe in you, to listen to you, to count on you and to give you the benefit of the doubt that leaders so desperately need during times of uncertainty, ambiguity and crisis. This is why you don’t want to damage your leadership integrity or cause people to question your credibility by throwing out unthoughtful words or phrases that could give them pause. ... Avoid saying something like “mistakes were made,” which shows a complete lack of leadership integrity and signals that someone somewhere made a mistake but that you take no ownership of it. Instead, go ahead and accept responsibility and show that you are accountable for the mistake and for the resolution as well.


Generative AI is not going to build your engineering team for you

Generative AI is like a junior engineer in that you can’t roll their code off into production. You are responsible for it—legally, ethically, and practically. You still have to take the time to understand it, test it, instrument it, retrofit it stylistically and thematically to fit the rest of your code base, and ensure your teammates can understand and maintain it as well. The analogy is a decent one, actually, but only if your code is disposable and self-contained, i.e. not meant to be integrated into a larger body of work, or to survive and be read or modified by others. And hey—there are corners of the industry like this, where most of the code is write-only, throwaway code. ... To state the supremely obvious: giving code review feedback to a junior engineer is not like editing generated code. Your effort is worth more when it is invested into someone else’s apprenticeship. It’s an opportunity to pass on the lessons you’ve learned in your own career. Even just the act of framing your feedback to explain and convey your message forces you to think through the problem in a more rigorous way, and has a way of helping you understand the material more deeply. And adding a junior engineer to your team will immediately change team dynamics. It creates an environment where asking questions is normalized and encouraged, where teaching as well as learning is a constant. 


Architectural Decision-Making: AI Tools as Consensus Builders

In an environment with lots of smart, quick-thinking people it can be a challenge to ensure everyone is heard, especially when the primary mode of interaction is videoconferencing. The online format (a Microsoft Teams group chat) gave people time to contribute their thoughts over a period of days rather than minutes. At various points in the online conversation, participants extracted content from the online discussion board and fed it to a large language model to compare ideas that were present in the dialogue, or to recast the dialogue in a particular person’s voice. ... The benefits of using AI tools are not cost free. It’s important to verify the results of an AI’s synthesis of text because sometimes the AI misinterprets what was written. For example, during our discussion of capabilities and domains, an AI tool interpreted some of my text as stating that the boundaries of a domain are context dependent when in fact, I was making the opposite argument – that a domain must have a consistent definition that is valid across any contexts in which it participates. Another consideration is the ethics of intellectual property ownership and citation of participants’ contributions. 


Perhaps the biggest challenge of IaC operations is drifts — a scenario where runtime environments deviate from their IaC-defined states, creating a festering issue that could have serious long-term implications. These discrepancies undermine the consistency of cloud environments, leading to potential issues with infrastructure reliability and maintainability and even significant security and compliance risks. ... But having additional context for drift, as important as it may be, is only one piece of a much bigger puzzle. Managing large cloud fleets with codified resources introduces more than just drift challenges, especially at scale. Current-gen IaC management tools are effective at addressing resource management, but the demand for greater visibility and control in enterprise-scale environments is introducing new requirements and driving their inevitable evolution. ... The combination of IaC management and CAM empowers teams to manage complexity with clarity and control. As the end of the year approaches, it's 'prediction season' — so here’s mine. Having spent the better part of the last decade building and refining one of the more popular IaC management platforms, I see this as the natural progression of our industry: combining IaC management, automation, and governance with enhanced visibility into non-codified assets.
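Drift detection, at its core, is a diff between the state the IaC code declares and the state observed at runtime. The sketch below is a hypothetical, simplified illustration (the resource names and attributes are invented, and real tools read live state from provider APIs rather than a dict):

```python
# Hypothetical drift check: compare IaC-defined resource attributes
# against the attributes observed in the runtime environment.
desired = {
    "s3_bucket/logs": {"versioning": True, "public_access": False},
    "sg/web": {"ingress_ports": [443]},
}
actual = {
    "s3_bucket/logs": {"versioning": True, "public_access": True},   # drifted
    "sg/web": {"ingress_ports": [443, 22]},                          # drifted
}

def detect_drift(desired, actual):
    """Return, per resource, the attributes whose runtime value deviates
    from the codified value, as (expected, observed) pairs."""
    drift = {}
    for res, attrs in desired.items():
        live = actual.get(res, {})
        diffs = {k: (v, live.get(k)) for k, v in attrs.items() if live.get(k) != v}
        if diffs:
            drift[res] = diffs
    return drift

report = detect_drift(desired, actual)
```

The security stakes mentioned above are visible even in this toy: the drifted `public_access` flag and the extra open port 22 are precisely the kind of deviations that turn a drift report into a compliance finding.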


4 keys for writing cross-platform apps

One big problem with cross-platform compiling is how asymmetrical it can be. If you’re a macOS user, it’s easy to set up and maintain Windows or Linux virtual machines on the Mac. If you use Linux or Windows, it’s harder to emulate macOS on those platforms. Not impossible, just more difficult—the biggest reason being the legal issues, as macOS’s EULA does not allow it to be used on non-Apple hardware. The easiest workaround is to simply buy a separate Macintosh system and use that. Another option is to use tools like osxcross to perform cross-compilation on a Linux, FreeBSD, or OpenBSD system. Another common option, one most in line with modern software delivery methods, is to use a system like GitHub Actions. The downside is paying for the use of the service, but if you’re already invested in either platform, it’s often the most economical and least messy approach. Plus, it keeps the burden of system maintenance out of your hands. ... The way we write and deploy apps is always in flux. Who would have anticipated the container revolution, for instance? Or predicted the dominant language for machine learning and AI would be Python? To that end, it’s always worth keeping an eye on the future, since cross-platform deployment is fast becoming a must-have feature.


The Connected Revolution: How Integrated Intelligence is Reshaping Drug Development

CI and end-to-end quality are dismantling traditional silos and fostering a seamless, data-driven ecosystem. The use of CI, potentially with data lakes as a way of consolidating vast amounts of data from disparate sources, removes silos that exist between independent systems sitting with siloed departments. The movement of data, for example clinical data that is needed in regulatory submissions, or safety data that is needed alongside regulatory data for regulatory reports, brings a level of fluidity to data management and helps companies optimize time and resources to generate product quality and safety insights. ... For clinical trials, CI and end-to-end quality can significantly enhance patient recruitment and retention. Advanced analytics can identify suitable candidates more efficiently, while real-time monitoring through connected devices can provide continuous data on patient responses and the identification of potential adverse events. This improves the quality of data collected, enhances patient safety and reduces trial time and cost. ... CI and AI-driven regulatory intelligence, in the context of quality-controlled procedures, can support the gathering of global submission requirements and the creation of global submission content, which will then be subject to human review as part of QC.



Quote for the day:

"A leader is best when people barely know he exists, when his work is done, his aim fulfilled, they will say: we did it ourselves." -- Lao Tzu

Daily Tech Digest - January 01, 2025

The Architect’s Guide to Open Table Formats and Object Storage

Data lakehouse architectures are purposefully designed to leverage the scalability and cost-effectiveness of object storage systems, such as Amazon Web Services (AWS) S3, Google Cloud Storage and Azure Blob Storage. This integration enables the seamless management of diverse data types — structured, semi-structured and unstructured — within a unified platform. ... The open table formats also incorporate features designed to boost performance. These also need to be configured properly and leveraged for a fully optimized stack. One such feature is efficient metadata handling, where metadata is managed separately from the data, which enables faster query planning and execution. Data partitioning organizes data into subsets, improving query performance by reducing the amount of data scanned during operations. Support for schema evolution allows table formats to adapt to changes in data structure without extensive data rewrites, ensuring flexibility while minimizing processing overhead.
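The partitioning feature described above can be illustrated with a toy in-memory example (the rows and partition key are invented for illustration; open table formats such as Iceberg or Delta track partitions in table metadata rather than in a dict): rows are grouped by a partition key so a filtered query scans only the partitions it needs.

```python
from collections import defaultdict

# Toy illustration of data partitioning: organise rows into subsets by a
# partition key so a query touches only the partitions it needs.
rows = [
    {"date": "2025-01-01", "region": "eu", "amount": 10},
    {"date": "2025-01-01", "region": "us", "amount": 25},
    {"date": "2025-01-02", "region": "eu", "amount": 5},
]

partitions = defaultdict(list)
for row in rows:
    partitions[row["date"]].append(row)   # partition by date

# A query filtered on date scans one partition instead of the whole table.
scanned = partitions["2025-01-02"]
```

Here a query for 2025-01-02 reads one row rather than three; at lakehouse scale the same pruning skips whole object-storage prefixes, which is where the query-performance gain comes from.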


The future of open source will be messy

First, it’s important to point out that open source software is both pervasive and foundational. Where would we be without Linux and the vast treasure trove of other open source projects on which the internet is built? However, the vast majority of software, written for use or sale, is not open source. This has always been true. Developers do care about open source, and for good reason, but it is not their top concern. As Redis CEO Rowan Trollope told me in a recent interview, “If you’re the average developer, what you really care about is capability: Does this [software] offer something unique and differentiated that’s awesome that I need in my application.” ... Meanwhile, Meta and the rest of the industry keep releasing new code, calling it open source or open weights (Sam Johnston offers a great analysis), without much concern for what the OSI or anyone else thinks. Johnston may be exaggerating when he says, “The more [the word] open appears in an artificial intelligence product’s branding, the less open it actually tends to be,” but it’s clear that the term open gets used a lot, starting with category leader OpenAI, which is not open in any discernible sense, without much concern for any traditional definitions. 


What’s next for generative AI in 2025?

“Data is the lifeblood of any AI initiative, and the success of these projects hinges on the quality of the data that feeds the models,” said Andrew Joiner, CEO of Hyperscience, which develops AI-based office work automation tools. “Alarmingly, three out of five decision makers report their lack of understanding of their own data inhibits their ability to utilize genAI to its maximum potential. The true potential…lies in adopting tailored SLMs, which can transform document processing and enhance operational efficiency.” Gartner recommends that organizations customize SLMs to specific needs for better accuracy, robustness, and efficiency. “Task specialization improves alignment, while embedding static organizational knowledge reduces costs. Dynamic information can still be provided as needed, making this hybrid approach both effective and efficient,” the research firm said. ... While Agentic AI architectures are a top emerging technology, they’re still two years away from reaching the lofty automation expected of them, according to Forrester. While companies are eager to push genAI into complex tasks through AI agents, the technology remains challenging to develop because it mostly relies on synergies between multiple models, customization through retrieval augmented generation (RAG), and specialized expertise. 


The Perils of Security Debt: Serious Pitfalls to Avoid

Security debt is caused by a failure to “build security in” to software from design to deployment as part of the SDLC. Security debt accumulates when a development organization releases software with known issues, deferring the redressal of its weaknesses and vulnerabilities. Sometimes the organization skips certain test cases or scenarios in pursuit of faster deployment, failing in the process to test the software thoroughly. Sometimes the business decides that the pressure to finish a project is so great that it makes more sense to release now and fix issues later. Later is better than never, but when “later” never arrives, existing security debt becomes worse. ... Great leadership is the beacon that not only charts the course but also ensures your crew – your IT team, support staff, and engineers – are well-prepared to face the challenges ahead. It instills discipline, vigilance, and a culture of security that can withstand the fiercest digital storms. The Board and leadership must understand and champion the importance of security for the organization. By setting the tone at the top, they can drive the cultural and procedural changes needed to prevent the accumulation of security debt. Periodic review and monitoring of security metrics, and identifying and tracking security debt as a risk, can help keep the organization accountable and on track.


The long-term impacts of AI on networking

Every enterprise that self-hosted AI told me the mission demanded more bandwidth to support “horizontal” traffic than their normal applications generate, and more than their current data center network could support. Ten of the group said this meant they’d need the “cluster” of AI servers to have faster Ethernet connections and higher-capacity switches. Everyone agreed that a real production deployment of on-premises AI would need new network devices, and fifteen said they bought new switches even for their large-scale trials. The biggest problem with the data center network I heard from those with experience is that they believed they had built up more of an AI cluster than they needed. Running a popular LLM, they said, requires hundreds of GPUs and servers, but small language models can run on a single system, and a third of current self-hosting enterprises said they believed it is best to start small, with small models, and build up only when you have experience and can demonstrate a need. This same group also pointed out that control was needed to ensure only truly useful AI applications were run. “Applications otherwise build up, exceed, and then increase the size of the AI cluster,” said users. 


Bridging Skill Gaps in the Automotive Industry with AI-Led Immersive Simulations

This crisis of personnel shortfall is particularly acute in sectors like autonomous driving and AI-driven manufacturing, where the required skillset surpasses the capabilities of the current workforce. This alarming shortage of specialised expertise poses a serious threat to the industry’s progress. It could potentially lead to production halts at various facilities, delay the launch of next-generation vehicles, and hinder the transition to self-driving cars powered by sustainable energy. In order to address this issue, orthodox educational methods must be modernised to incorporate cutting-edge technologies like AI and robotics. ... Unlike traditional training, which often involves static lessons or expensive hands-on practice, immersive simulations allow workers to practice in environments that would be too risky or costly in real life. For example, with autonomous vehicles, workers can practice fixing and calibrating vehicle systems in a virtual world without the risk of damaging anything. These simulations can also create different road conditions for workers to experience, helping them build critical decision-making skills without real-world consequences. 


AI agents might be the new workforce, but they still need a manager

AI agents need to be thoughtfully managed, just as is the case with human work, and there's work to be done before an agentic AI-driven workforce can truly assume a broad range of tasks. "While the promise of agentic AI is evident, we are several years away from widespread agentic AI adoption at the enterprise level," said Scott Beechuk, partner with Norwest Venture Partners. "Agents must be trustworthy given their potential role in automating mission-critical business processes." The traceability of AI agents' actions is one issue. "Many tools have a hard time explaining how they arrived at their responses from users' sensitive data, and models struggle to generalize beyond what they have learned," said Ananthakrishnan. ... Unpredictability is a related challenge, as LLMs "operate like black boxes," said Beechuk. "It's hard for users and engineers to know if the AI has successfully completed its task and if it did so correctly." ... Human workers are also capable of collaborating easily and on a regular basis. For AI workers, it's a different story. "Because agents will interact with multiple systems and data stores, achieving comprehensive visibility is no easy task," said Ananthakrishnan. It's important to have the visibility to capture each action an agent takes.


Change management: Achieve your goals with the right change model

You need a good leadership team of influential people who are all pulling in the same direction. This is the only way to implement upcoming changes and anchor them in the company. It is important to include people in the leadership team who have a great deal of influence and/or are well respected by the workforce. At the same time, these people must be fully committed to the planned change. ... Communication comes before implementation. Those affected must understand it to become participants or supporters. Initiating measures without first explaining the context to those involved would unnecessarily create unrest in the company. When communicating, it makes sense to proceed in several steps: the change team first informs the clients and gets a “go” from them. After that, the change team informs the managers so that they can answer questions from employees during company-wide communication. ... Quick wins must be realized and made visible to increase motivation. Quick wins should therefore also be identified when defining objectives, because success is important to ensure that the initial motivation does not fizzle out. Initial successes should be related to the overarching goal, because then they strengthen intrinsic motivation. Small successes can thus have a big impact.


Forrester on cybersecurity budgeting: 2025 will be the year of CISO fiscal accountability

Forrester sees the increasing adoption of AI and generative AI (gen AI) as driving the needed updates to infrastructure. “Any Gen AI project that we discussed with customers ultimately becomes a data integration project,” says Pascal Matska, vice president and research director at Forrester. “You have to invest into specific capabilities and platforms that run specific AI workloads in the most suitable infrastructure at the right price point, and also drive investments into cloud-native technologies such as Kubernetes and containers and modern data platforms that really are there to help you drive out some of the frictions that exist within the different business silos,” Matska continued. ... CISOs who drive gains in revenue advance their careers. “When something touches as much revenue as cybersecurity does, it is a core competency. And you can’t argue that it isn’t,” Jeff Pollard, VP and principal analyst at Forrester, said during his keynote titled “Cybersecurity Drives Revenue: How to Win Every Budget Battle” at the company’s Security and Risk Forum in 2022. Budgeting to protect revenue needs to start with the weakest, most at-risk areas. These include software supply chain security, API security, human risk management, and IoT/OT threat detection. 


Passkey technology is elegant, but it’s most definitely not usable security

"The problem with passkeys is that they're essentially a halfway house to a password manager, but tied to a specific platform in ways that aren't obvious to a user at all, and liable to easily leave them unable to access ... their accounts," wrote the Danish software engineer and programmer, who created Ruby on Rails and is the CTO of web-based software development firm 37signals. "Much the same way that two-factor authentication can do, but worse, since you're not even aware of it." ... The security benefits of passkeys at the moment are also undermined by an undeniable truth. Of the hundreds of sites supporting passkeys, there isn't one I know of that allows users to ditch their password completely. The password is still mandatory. And with the exception of Google's Advanced Protection Program, I know of no sites that won't allow logins to fall back on passwords, often without any additional factor. ... Under the FIDO2 spec, the passkey can never leave the security key, except as an encrypted blob of bits when the passkey is being synced from one device to another. The secret key can be unlocked only when the user authenticates to the physical key using a PIN, password, or most commonly a fingerprint or face scan. In the event the user authenticates with a biometric, the biometric data never leaves the security key, just as it never leaves Android and iOS phones or computers running macOS or Windows.
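At its core, the FIDO2 flow described above is a public-key challenge-response. Below is a minimal sketch using the third-party `cryptography` package, assuming an ECDSA P-256 credential: the private key stays with the authenticator, while the server stores only the public key and verifies a signature over a fresh random challenge. Real WebAuthn adds origin binding, signature counters, and CBOR-encoded attestation, all omitted here.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Registration: the authenticator generates a key pair; the server keeps only the public half.
authenticator_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = authenticator_key.public_key()

# Login: the server issues a random challenge; the authenticator signs it locally.
challenge = os.urandom(32)
signature = authenticator_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

def verify(public_key, challenge, signature):
    """Server-side check: no shared secret ever crossed the wire."""
    try:
        public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False
```

Because the challenge is fresh per login, a captured signature cannot be replayed, which is the property that makes passkeys phishing-resistant.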



Quote for the day:

"You are a true success when you help others be successful." -- Jon Gordon

Daily Tech Digest - December 30, 2024

Top Considerations To Keep In Mind When Designing Your Enterprise Observability Framework

Observability goes beyond traditional monitoring tools, offering a holistic approach that aggregates data from diverse sources to provide actionable insights. While Application Performance Monitoring (APM) once sufficed for tracking application health, the increasing complexity of distributed, multi-cloud environments has made it clear that a broader, more integrated strategy is essential. Modern observability frameworks now focus on real-time analytics, root cause identification, and proactive risk mitigation. ... Business optimization and cloud modernization often face resistance from teams and stakeholders accustomed to existing tools and workflows. To overcome this, it’s essential to clearly communicate the motivations behind adopting a new observability strategy. Aligning these motivations with improved customer experiences and demonstrable ROI helps build organizational buy-in. Stakeholders are more likely to support changes when the outcomes directly benefit customers and contribute to business success. ... Enterprise observability systems must manage vast volumes of data daily, enabling near real-time analysis to ensure system reliability and performance. While this task can be costly and complex, it is critical for maintaining operational stability and delivering seamless user experiences.


Blown the cybersecurity budget? Here are 7 ways cyber pros can save money

David Chaddock, managing director, cybersecurity, at digital services firm West Monroe, advises CISOs to start by ensuring or improving their cyber governance to “spread the accountability to all the teams responsible for securing the environment.” “Everyone likes to say that the CISO is responsible and accountable for security, but most times they don’t own the infrastructure they’re securing or the budget for doing the maintenance, they don’t have influence over the applications with the security vulnerabilities, and they don’t control the resources to do the security work,” he says. ... Torok, Cooper and others acknowledge that implementing more automation and AI capabilities requires an investment. However, they say the investments can deliver returns (in increased efficiencies as well as avoided new salary costs) that exceed the costs to buy, deploy and run those new security tools. ... Ulloa says he also saves money by avoiding auto-renewals on contracts – thereby ensuring he can negotiate with vendors before inking the next deal. He acknowledges missing one contract set to auto-renew and getting stuck with a 54% increase. “That’s why you have to have a close eye on those renewals,” he adds.


7 Key Data Center Security Trends to Watch in 2025

Historically, securing both types of environments in a unified way was challenging because cloud security tools worked differently from the on-prem security solutions designed for data centers, and vice versa. Hybrid cloud frameworks, however, are helping to change this. They offer a consistent way of enforcing access controls and monitoring for security anomalies across both public cloud environments and workloads hosted in private data centers. Building a hybrid cloud to bring consistency to security and other operations is not a totally new idea. ... Edge data centers can help to boost workload performance by locating applications and data closer to end-users. But they also present some unique security challenges, due especially to the difficulty of ensuring physical security for small data centers in areas that lack traditional physical security protections. Nonetheless, as businesses face greater and greater pressure to optimize performance, demand for edge data centers is likely to grow. This will likely lead to greater investment in security solutions for edge data centers. ... Traditionally, data center security strategies typically hinged on establishing a strong perimeter and relying on it to prevent unauthorized access to the facility. 


What we talk about when we talk about ‘humanness’

Civic is confident enough in its mission to know where to draw the line between people and agglomerations of data. It says that “personhood is an inalienable human right which should not be confused with our digital shadows, which ultimately are simply tools to express that personhood.” Yet, there are obvious cognitive shifts going on in how we as humans relate to machines and their algorithms, and define ourselves against them. In giving an example of how digital identity and digital humanness diverge, Civic notes “AI agents will have a digital identity and may execute actions on behalf of their owners, but themselves may not have a proof of personhood.” The implication is startling: algorithms are now understood to have identities, or to possess the ability to have them. The linguistic framework for how we define ourselves is no longer the exclusive property of organic beings. ... There is a paradox in making the simple fact of being human contingent on the very machines from which we must be differentiated. In a certain respect, asking someone to justify and prove their own fundamental understanding of reality is a kind of existential gaslighting, tugging at the basic notion that the real and the digital are separate realms.


Revolutionizing Oil & Gas: How IIoT and Edge Computing are Driving Real-Time Efficiency and Cutting Costs

Maintenance is a significant expense in oil and gas operations, but IIoT and edge computing are helping companies move from reactive maintenance to predictive maintenance models. By continuously monitoring the health of equipment through IIoT sensors, companies can predict failures before they happen, reducing costly unplanned shutdowns. ... In an industry where safety is paramount, IIoT and edge computing also play a critical role in mitigating risks to both personnel and the environment. Real-time environmental monitoring, such as gas leak detection or monitoring for unsafe temperature fluctuations, can prevent accidents and minimize the impact of any potential hazards. Consider the implementation of smart sensors that monitor methane leaks at offshore rigs. By analyzing this data at the edge, systems can instantly notify operators if any leaks exceed safe thresholds. This rapid response helps prevent harmful environmental damage and potential regulatory fines while also protecting workers’ safety. ... Scaling oil and gas operations while maintaining performance is often a challenge. However, IIoT and edge computing’s ability to decentralize data processing makes it easier for companies to scale up operations without overloading their central servers. 
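The methane-leak example above can be sketched as simple edge-side threshold logic: readings are evaluated locally on the edge device, and only alerts, not the raw sensor stream, travel upstream. The threshold value, sensor IDs, and alert fields are hypothetical.

```python
METHANE_ALARM_PPM = 1000  # hypothetical safe limit; real limits are site- and regulation-specific

def evaluate_reading(sensor_id, ppm, alarm=METHANE_ALARM_PPM):
    """Return an alert record if a reading breaches the threshold, else None."""
    if ppm > alarm:
        return {"sensor": sensor_id, "ppm": ppm, "action": "notify_operator"}
    return None

# Simulated batch of local readings from one offshore rig.
readings = [("rig-7-a", 120.0), ("rig-7-b", 1450.0), ("rig-7-c", 300.5)]
alerts = [a for sid, ppm in readings if (a := evaluate_reading(sid, ppm))]
```

The point of doing this at the edge is latency and bandwidth: the decision is made in microseconds next to the sensor, and only the single alert record needs a network hop.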


Gain Relief with Strategic Secret Governance

Incorporating NHI management into cybersecurity strategy provides comprehensive control over cloud security. This approach enables businesses to extensively decrease the risk of security breaches and data leaks, creating a sense of relief in our increasingly digital age. With cloud services growing rapidly, the need for effective NHIs and secrets management is more critical than ever. A study by IDC predicts that by 2025, there will be a 3-fold increase in the data volumes in the digital universe, with 49% of this data residing in the cloud. NHI management is not limited to a single industry or department. It is applicable across financial services, healthcare, travel, DevOps, and SOC teams. Any organization working in the cloud can benefit from this strategic approach. As businesses continue to digitize, NHIs and secrets management become increasingly relevant. Adapting to effectively manage these elements can bring relief to businesses from the overwhelming task of cyber threats, offering a more secure, efficient, and compliant operational environment. ... The application of NHI management is not confined to singular industries or departments. It transcends multiple sectors, including healthcare, financial services, travel industries, and SOC teams. 


Five breakthroughs that make OpenAI’s o3 a turning point for AI — and one big challenge

OpenAI’s o3 model introduces a new capability called “program synthesis,” which enables it to dynamically combine things that it learned during pre-training—specific patterns, algorithms, or methods—into new configurations. These things might include mathematical operations, code snippets, or logical procedures that the model has encountered and generalized during its extensive training on diverse datasets. Most significantly, program synthesis allows o3 to address tasks it has never directly seen in training, such as solving advanced coding challenges or tackling novel logic puzzles that require reasoning beyond rote application of learned information. ... One of the most groundbreaking features of o3 is its ability to execute its own Chains of Thought (CoTs) as tools for adaptive problem-solving. Traditionally, CoTs have been used as step-by-step reasoning frameworks to solve specific problems. OpenAI’s o3 extends this concept by leveraging CoTs as reusable building blocks, allowing the model to approach novel challenges with greater adaptability. Over time, these CoTs become structured records of problem-solving strategies, akin to how humans document and refine their learning through experience. This ability demonstrates how o3 is pushing the frontier in adaptive reasoning.


Multitenant data management with TiDB

The foundation of TiDB’s architecture is its distributed storage layer, TiKV. TiKV is a transactional key-value storage engine that shards data into small chunks, each represented as a split. Each split is replicated across multiple nodes in the cluster using the Raft consensus algorithm to ensure data redundancy and fault tolerance. The sharding and resharding processes are handled automatically by TiKV, operating independently from the application layer. This automation eliminates the operational complexity of manual sharding—a critical advantage especially in complex, multitenant environments where manual data rebalancing would be cumbersome and error-prone. ... In a multitenant environment, where a single component failure could affect numerous tenants simultaneously, high availability is critical. TiDB’s distributed architecture directly addresses this challenge by minimizing the blast radius of potential failures. If one node fails, others take over, maintaining continuous service across all tenant workloads. This is especially important for business-critical applications where uptime is non-negotiable. TiDB’s distributed storage layer ensures data redundancy and fault tolerance by automatically replicating data across multiple nodes.
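A toy illustration of the range-sharding idea described above, assuming string keys and a made-up size budget: the key space is divided into contiguous ranges, and a range that outgrows its budget splits in two with no application involvement. Raft replication and cross-node rebalancing are omitted, and TiKV's real split trigger is region size in bytes, not key count.

```python
MAX_KEYS_PER_SPLIT = 4  # made-up budget for illustration

def insert(splits, key):
    """Route a key to its range; split the range automatically when it exceeds the budget."""
    for i, s in enumerate(splits):
        if s["start"] <= key < s["end"]:
            s["keys"].add(key)
            if len(s["keys"]) > MAX_KEYS_PER_SPLIT:
                mid = sorted(s["keys"])[len(s["keys"]) // 2]
                left = {"start": s["start"], "end": mid,
                        "keys": {k for k in s["keys"] if k < mid}}
                right = {"start": mid, "end": s["end"],
                         "keys": {k for k in s["keys"] if k >= mid}}
                splits[i:i + 1] = [left, right]  # the application layer never sees this
            return

# One range covers the whole key space at first.
splits = [{"start": "", "end": "\uffff", "keys": set()}]
for k in ["a", "g", "m", "q", "t", "x"]:
    insert(splits, k)
```

After six inserts the single initial range has split at its median key, which is exactly the kind of rebalancing that would be cumbersome and error-prone to do by hand in a multitenant environment.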


Deconstructing DevSecOps

Time and again I am reminded that there is a limit to how far collaboration can take a team. This can be because another team has a limit to how many resources it is willing to allocate, or because it is incapable of contributing regardless of the resources offered. This is often the case with cyber teams that haven't restructured or adapted the training of their personnel to support DevSecOps. Too often these teams are policy wonks that will happily redirect you to the help desk instead of assisting anyone. Another huge problem is the tooling ecosystem itself. While DevOps has an embarrassment of riches in open source tooling, DevSecOps instead has an endless number of licensing fees awaiting. Worse yet, many of these tools are only designed to catch common security issues in code. This is still better than nothing, but it is pretty underwhelming when you are responsible for remediating the sheer number of redundant (or duplicate) findings that have no bearing. Once an organization begins to implement DevSecOps, it can quickly spiral. This happens when the organization is unable to determine what acceptable risk is any longer. Once this happens, any rapid prototyping capability will simply no longer be allowed.


Machine identities are the next big target for attackers

“Attackers are now actively exploring cloud native infrastructure,” said Kevin Bocek, Chief Innovation Officer at Venafi, a CyberArk Company. “A massive wave of cyberattacks has now hit cloud native infrastructure, impacting most modern application environments. To make matters worse, cybercriminals are deploying AI in various ways to gain unauthorized access and exploiting machine identities using service accounts on a growing scale. The volume, variety and velocity of machine identities are becoming an attacker’s dream.” ... “There is huge potential for AI to transform our world positively, but it needs to be protected,” Bocek continues. “Whether it’s an attacker sneaking in and corrupting or even stealing a model, a cybercriminal impersonating an AI to gain unauthorized access, or some new form of attack we have not even thought of, security teams need to be on the front foot. This is why a kill switch for AI – based on the unique identity of individual models being trained, deployed and run – is more critical than ever.” ... 83% think having multiple service accounts also creates a lot of added complexity, but most (91%) agree that service accounts make it easier to ensure that policies are uniformly defined and enforced across cloud native environments.



Quote for the day:

"Don't wait for the perfect moment take the moment and make it perfect." -- Aryn Kyle

Daily Tech Digest - December 29, 2024

AI agents may lead the next wave of cyberattacks

“Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said. “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.” Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot. Captcha codes may require users to decipher scrambled letters or count the number of traffic lights in an image. ... “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said. Probabilistic security provides protections based on probabilities but assumes that absolute security can’t be guaranteed. Stytch is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
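The deterministic-fingerprinting idea can be sketched as hashing a canonical encoding of stable device attributes, so the same device yields the same identifier every time, in contrast to a probabilistic risk score. The attribute names here are hypothetical; real fingerprinting uses far richer signals plus server-side validation.

```python
import hashlib
import json

def device_fingerprint(attrs):
    """Hash a canonical (sorted-key, compact) JSON encoding of device attributes."""
    canonical = json.dumps(attrs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

device = {"os": "macOS 14.2", "screen": "3024x1964", "tz": "Europe/London"}
fp1 = device_fingerprint(device)
# Same attributes in a different order produce the identical fingerprint.
fp2 = device_fingerprint({"tz": "Europe/London", "os": "macOS 14.2", "screen": "3024x1964"})
```

The sorted-key canonicalization is what makes the scheme deterministic: insertion order, whitespace, and other encoding noise cannot change the resulting ID.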


How businesses can ensure cloud uptime over the holidays

To ensure uptime during the holidays, best practice should include conducting pre-holiday stress tests to identify system vulnerabilities and configure autoscaling to handle demand surges. Experts also recommend simulating failures through chaos engineering to expose weaknesses. Redundancy across regions or availability zones is essential, as is a well-documented incident response plan – with clear escalation paths – “as this allows a team to address problems quickly even with reduced staffing,” says VimalRaj Sampathkumar, technical head – UKI at software company ManageEngine. It’s all about understanding the business requirements and what your demand is going to look like, says Luan Hughes, chief information officer (CIO) at tech provider Telent, as this will vary from industry to industry. “When we talk about preparedness, we talk a lot about critical incident management and what happens when big things occur, but I think you need to have an appreciation of what your triggers are,” she says. ... It’s also important to focus on your people as much as your systems, she adds, noting that it’s imperative to understand your management processes, out-of-hours and on-call rota and how you action support if problems do arise.


Tech worker movements grow as threats of RTO, AI loom

While layoffs likely remain the most extreme threat to tech workers broadly, a return-to-office (RTO) mandate can be just as jarring for remote tech workers who are either unable to comply or else unwilling to give up the better work-life balance that comes with no commute. Advocates told Ars that RTO policies have pushed workers to join movements, while limited research suggests that companies risk losing top talents by implementing RTO policies. ... Other companies mandating RTO faced similar backlash from workers, who continued to question the logic driving the decision. One February study showed that RTO mandates don't make companies any more valuable but do make workers more miserable. And last month, Brian Elliott, an executive advisor who wrote a book about the benefits of flexible teams, noted that only one in three executives thinks RTO had "even a slight positive impact on productivity." But not every company drew a hard line the way that Amazon did. For example, Dell gave workers a choice to remain remote and accept they can never be eligible for promotions, or mark themselves as hybrid. Workers who refused the RTO said they valued their free time and admitted to looking for other job opportunities.


Navigating the cloud and AI landscape with a practical approach

When it comes to AI or genAI, just like everyone else, we started with use cases that we can control. These include content generation, sentiment analysis and related areas. As we explored these use cases and gained understanding, we started to dabble in other areas. For example, we have an exciting use case for cleaning up our data that leverages genAI as well as non-generative machine learning to help us identify inaccurate product descriptions or incorrect classifications and then clean them up and regenerate accurate, standardized descriptions. ... While this might be driving internal productivity, you also must think of it this way: As a distributor, at any one time, we deal with millions of parts. Our supplier partners keep sending us their price books, spec sheets and product information every quarter. So, having a group of people trying to go through all that data to find inaccuracies is a daunting, almost impossible, task. But with AI and genAI capabilities, we can clean up any inaccuracies far more quickly than humans could. Sometimes within as little as 24 hours. That helps us improve our ability to convert and drive business through an improved experience for our customers.


When the System Fights Back: A Journey into Chaos Engineering

Enter chaos engineering — the art of deliberately creating disaster to build stronger systems. I’d read about Netflix’s Chaos Monkey, a tool designed to randomly kill servers in production, and I couldn’t help but admire the audacity. What if we could turn our system into a fighter — one that could take a punch and still come out swinging? ... Chaos engineering taught me more than I expected. It’s not just a technical exercise; it’s a mindset. It’s about questioning assumptions, confronting fears, and embracing failure as a teacher. We integrated chaos experiments into our CI/CD pipeline, turning them into regular tests. Post-mortems became celebrations of what we’d learned, rather than finger-pointing sessions. And our systems? Stronger than ever. But chaos engineering isn’t just about the tech. It’s about the culture you build around it. It’s about teaching your team to think like detectives, to dig into logs and metrics with curiosity instead of dread. It’s about laughing at the absurdity of breaking things on purpose and marveling at how much you learn when you do. So here’s my challenge to you: embrace the chaos. Whether you’re running a small app or a massive platform, the principles hold true. 
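In the spirit of Chaos Monkey, a minimal fault injector can wrap a call and make it fail at random, so retry and fallback logic gets exercised in tests rather than discovered in production. The failure rate and the retry policy shown are illustrative choices, not any particular tool's behavior.

```python
import random

def chaotic(fn, failure_rate=0.3, rng=random.random):
    """Wrap fn so that calls sometimes fail with a simulated dependency outage."""
    def wrapper(*args, **kwargs):
        if rng() < failure_rate:
            raise ConnectionError("chaos: simulated dependency outage")
        return fn(*args, **kwargs)
    return wrapper

def with_retry(fn, attempts=5):
    """The resilience pattern under test: retry on transient failure."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise
```

Wiring `chaotic(...)` around a dependency inside a CI/CD test run is a cheap way to turn "what if this service dies?" from a post-mortem question into a regular regression test.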


Enhancing Your Company’s DevEx With CI/CD Strategies

CI/CD pipelines are key to an engineering organization’s efficiency, used by up to 75% of software companies with developers interacting with them daily. However, these CI/CD pipelines are often far from being the ideal tool to work with. A recent survey found that only 14% of practitioners go from code to production in less than a day, when high-performing teams should be able to deploy multiple times a day. ... Merging, building, deploying and running are all classic steps of a CI/CD pipeline, often handled by multiple tools. Some organizations have SREs that handle these functions, but not all developers are that lucky! In that case, if a developer wants to push code where a pipeline isn’t set up — which is increasingly common with the rise of microservices — they must assemble those rarely-used tools. However, this will disturb the flow state you wish your developers to remain in. ... Troubleshooting issues within a CI/CD pipeline can be challenging for developers due to a lack of visibility and information. These processes often operate as black boxes, running on servers that developers may not have direct access to, with software that is foreign to developers. Consequently, developers frequently rely on DevOps engineers — often understaffed — to diagnose problems, leading to slow feedback loops.


How to Architect Software for a Greener Future

Code efficiency is something that the platforms and the languages should make easy for us. They should do the work, because that's their area of expertise, and we should just write code. Yes, of course, write efficient code, but it's not a silver bullet. What about data center efficiency, then? Surely, if we just made our data center hyper efficient, we wouldn't have to worry. We could just leave this problem to someone else. ... It requires you to do some thinking. It also requires you to orchestrate this in some type of way. One way to do this is autoscaling. Let's talk about autoscaling. We have the same chart here, but we have added demand. Autoscaling is the simple concept that when you have more demand, you use more resources and you have a bigger box, a virtual machine, for example. The key here is that it's very easy to do the first thing. We like to do this: "I think demand is going to go up, provision more, have more space. Yes, I feel safe. I feel secure now." Going the other way is a little scarier. It's actually just as important when it comes to sustainability. Otherwise, we end up in the first scenario, where we are incorrectly sized for our resource use. Of course, this is a good tool to use if you have variability in demand. 
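The scale-up-fast, scale-down-carefully policy sketched in the talk can be expressed as a small function: capacity follows demand with headroom, but only shrinks after demand has stayed low for several consecutive intervals. All the numbers (headroom, cooldown length, shrink factor) are illustrative assumptions.

```python
def autoscale(capacity, demand_history, headroom=1.2, cooldown=3):
    """Return the new capacity after observing the latest demand sample."""
    demand = demand_history[-1]
    target = demand * headroom
    if target > capacity:
        return target  # scale up immediately: never run undersized
    recent = demand_history[-cooldown:]
    if len(recent) == cooldown and all(d * headroom < capacity for d in recent):
        # Demand has been low for the whole cooldown window: shrink gradually,
        # never below what current demand plus headroom requires.
        return max(target, capacity * 0.8)
    return capacity  # hold steady to avoid flapping
```

The asymmetry is the sustainability point: scaling up is a reflex, while scaling down needs evidence, but skipping the scale-down step leaves you permanently oversized for your actual resource use.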


Tech Trends 2025 shines a light on the automation paradox – R&D World

The surge in AI workloads has prompted enterprises to invest in powerful GPUs and next-generation chips, reinventing data centers as strategic resources. ... As organizations race to tap progressively more sophisticated AI systems, hardware decisions once again become integral to resilience, efficiency and growth, while leading to more capable “edge” deployments closer to humans and not just machines. As Tech Trends 2025 noted, “personal computers embedded with AI chips are poised to supercharge knowledge workers by providing access to offline AI models while future-proofing technology infrastructure, reducing cloud computing costs, and enhancing data privacy.” ... Data is the bedrock of effective AI, which is why “bad inputs lead to worse outputs—in other words, garbage in, garbage squared,” as Deloitte’s 2024 State of Generative AI in the Enterprise Q3 report observes. Fully 75% of surveyed organizations have stepped up data-life-cycle investments because of AI. Layer a well-designed data framework beneath AI, and you might see near-magic; rely on half-baked or biased data, and you risk chaos. As a case in point, Vancouver-based LIFT Impact Partners fine-tuned its AI assistants on focused, domain-specific data to help Canadian immigrants process paperwork—a far cry from scraping the open internet and hoping for the best.


What Happens to Relicensed Open Source Projects and Their Forks?

Several companies have relicensed their open source projects in the past few years, so the CHAOSS project decided to look at how an open source project’s organizational dynamics evolve after relicensing, both within the original project and its fork. Our research compares and contrasts data from three case studies of projects that were forked after relicensing: Elasticsearch with fork OpenSearch, Redis with fork Valkey, and Terraform with fork OpenTofu. These relicensed projects and their forks represent three scenarios that shed light on this topic in slightly different ways. ... OpenSearch was forked from Elasticsearch on April 12, 2021, under the Apache 2.0 license, by the Amazon Web Services (AWS) team so that it could continue to offer this service to its customers. OpenSearch was owned by Amazon until September 16, 2024, when it transferred the project to the Linux Foundation. ... OpenTofu was forked from Terraform on Aug. 25, 2023, by a group of users as a Linux Foundation project under the MPL 2.0. These users were starting from scratch with the codebase since no contributors to the OpenTofu repository had previously contributed to Terraform.


Setting up a Security Operations Center (SOC) for Small Businesses

In today's digital age, security is not optional for any business, irrespective of its size. Small businesses equally face increasing cyber threats, making it essential to have robust security measures in place. A SOC is a dedicated team responsible for monitoring, detecting, and responding to cybersecurity incidents in real time. It acts as the frontline defense against cyber threats, helping to safeguard your business's data, reputation, and operations. By establishing a SOC, you can proactively address security risks and enhance your overall cybersecurity posture. The cost of setting up a SOC may be prohibitive for a small business, in which case it may engage Managed Service Providers for all or part of the services. ... Establishing clear, well-defined processes is vital for the smooth functioning of your SOC. The NIST Cybersecurity Framework could be a good fit for all businesses, and one can define the processes that are essential and relevant considering the size, threat landscape, and risk tolerance of the business. ... Continuous training and development are essential for keeping your SOC team prepared to handle evolving threats. Offer regular training sessions, certifications, and workshops to enhance their skills and knowledge.
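One way to make "define the processes that are essential and relevant" concrete is to map candidate SOC processes onto the six NIST Cybersecurity Framework 2.0 functions and check for gaps. The function names are NIST's; the example processes and the `coverage_gaps` helper are illustrative assumptions, not part of the framework.

```python
# The six CSF 2.0 functions, each with hypothetical example SOC
# processes a small business might define under it.
NIST_CSF_SOC_PROCESSES = {
    "Govern":   ["define risk tolerance", "assign SOC roles and escalation paths"],
    "Identify": ["maintain asset inventory", "run periodic risk assessments"],
    "Protect":  ["patch management", "access control reviews"],
    "Detect":   ["log collection and SIEM alerting", "anomaly triage"],
    "Respond":  ["incident response runbooks", "stakeholder communication"],
    "Recover":  ["backup restoration drills", "post-incident reviews"],
}

def coverage_gaps(defined_functions: set) -> list:
    """Return CSF functions for which no SOC process has been defined yet."""
    return [fn for fn in NIST_CSF_SOC_PROCESSES if fn not in defined_functions]
```

A small business starting with only detection and response tooling can use such a checklist to see which functions its process definitions still leave uncovered.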



Quote for the day:

"Hardships often prepare ordinary people for an extraordinary destiny." -- C.S. Lewis