Daily Tech Digest - December 25, 2024

The promise and perils of synthetic data

Synthetic data is no panacea, however. It suffers from the same “garbage in, garbage out” problem as all AI. Models create synthetic data, and if the data used to train these models has biases and limitations, their outputs will be similarly tainted. For instance, groups poorly represented in the base data will be just as underrepresented in the synthetic data. “The problem is, you can only do so much,” Keyes said. “Say you only have 30 Black people in a dataset. Extrapolating out might help, but if those 30 people are all middle-class, or all light-skinned, that’s what the ‘representative’ data will all look like.” To this point, a 2023 study by researchers at Rice University and Stanford found that over-reliance on synthetic data during training can create models whose “quality or diversity progressively decrease.” Sampling bias — poor representation of the real world — causes a model’s diversity to worsen after a few generations of training, according to the researchers. Keyes sees additional risks in complex models such as OpenAI’s o1, which he thinks could produce harder-to-spot hallucinations in their synthetic data. These, in turn, could reduce the accuracy of models trained on the data — especially if the hallucinations’ sources aren’t easy to identify.
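The degradation loop the researchers describe can be illustrated with a toy simulation (an assumption-laden sketch, not the study's actual setup): a "model" is fit to its own filtered outputs each generation, and a mild quality filter that favors points near the mode stands in for sampling bias. Diversity, measured as standard deviation, shrinks generation after generation.

```python
import random
import statistics

def next_generation(samples, keep_frac=0.8):
    """Fit a simple Gaussian 'model' to the data, sample from it, then
    apply a quality filter that keeps points near the mode -- a stand-in
    for the sampling bias the researchers describe."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    synthetic = [random.gauss(mu, sigma) for _ in range(len(samples))]
    synthetic.sort(key=lambda x: abs(x - mu))
    return synthetic[: int(len(synthetic) * keep_frac)]

random.seed(0)
population = [random.gauss(0.0, 1.0) for _ in range(2000)]
spread = [statistics.stdev(population)]
for _ in range(5):
    population = next_generation(population)
    spread.append(statistics.stdev(population))

print([round(s, 3) for s in spread])  # diversity shrinks every generation
```

The exact numbers are irrelevant; the point is the direction of travel: once biased filtering enters the loop, each round of training on synthetic data narrows what the next model can express.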


Federal Privacy Is Inevitable in The US (Prepare Now)

The writing’s on the wall for federal privacy. It’s simply not tenable for almost half the states to have varying privacy thresholds while the other half have nothing. Our interconnected business and digital ecosystems need certainty and consistency across the country. Congress can and should stand up for American privacy. The good news? Recent history shows that sweeping reforms are possible. From the CHIPS and Science Act to major pandemic stimulus, lawmakers have shown their ability to meet moments with big regulations. While states deserve credit for filling the privacy void, federal action must follow. For now, there’s no time to waste. Enterprises that build privacy-ready operations today will be better positioned to thrive under future regulations, maintain customer trust, and turn compliance into a competitive advantage. On the other hand, slow-to-move companies risk regulatory penalties and loss of customer confidence in an increasingly privacy-conscious marketplace. Future-forward organizations recognize that investing in privacy isn’t just about compliance; it’s about building a sustainable competitive advantage in the data-driven economy. The choice is clear: invest in privacy now or play catch-up when federal mandates arrive.


AI use cases are going to get even bigger in 2025

Few sectors stand to gain more from AI advancements than defense. “We are witnessing a surge in applications like autonomous drone swarms, electronic spectrum awareness, and real-time battlefield space management, where AI, edge computing, and sensor technologies are integrated to enable faster responses and enhanced precision,” says Meir Friedland, CEO at RF spectrum intelligence company Sensorz. ... “AI is transforming genome sequencing, enabling faster and more accurate analyses of genetic data,” Khalfan Belhoul, CEO at the Dubai Future Foundation, tells Fast Company. “Already, the largest genome banks in the U.K. and the UAE each have over half a million samples, but soon, one genome bank will surpass this with a million samples.” But what does this mean? “It means we are entering an era where healthcare can truly become personalized, where we can anticipate and prevent certain diseases before they even develop,” Belhoul says. ... The potential for AI extends far beyond the use cases dominating today’s headlines. As Friedland notes, “AI’s future lies in multi-domain coordination, edge computing, and autonomous systems.” These advancements are already reshaping industries like manufacturing, agriculture, and finance.


2025 Will Be the Year That AI Agents Transform Crypto

The value of AI agents lies not just in their utility but in their potential to scale human capabilities. Agents are no longer just tools — they are emerging as participants in the on-chain economy, driving innovation across finance, gaming and decentralized social platforms. With protocols such as Virtuals and open-source frameworks like ELIZA, it’s becoming increasingly simple for developers to build, deploy and iterate AI agents that serve an ever more diverse set of use cases. ... Unlike the core foundational AI models that are developed behind the walled gardens of OpenAI and Anthropic, AI agents are being innovated in the trenches of the crypto world. And for good reason. Blockchains provide the ideal infrastructure as they offer permissionless and frictionless financial rails, enabling agents to seed wallets, transact and send funds autonomously — tasks that would be unfeasible using traditional financial systems. In addition, the open-source nature of crypto allows developers to leverage existing frameworks to launch and iterate on agents faster than ever before. With more no-code platforms like Top Hat gaining traction, it’s only getting easier for anyone to be able to launch an agent in minutes.


Unpacking OpenAI's Latest Approach to Make AI Safer

OpenAI said it used an internal reasoning model to generate synthetic examples of chain-of-thought responses, each referencing specific elements of the company's safety policy. Another model, referred to as the "judge," evaluated these examples to ensure they met quality standards. The approach looks to address the challenges of scalability and consistency, OpenAI said. Human-labeled datasets are labor-intensive and prone to variability, but properly vetted synthetic data can theoretically offer a scalable solution with uniform quality. The method can potentially optimize training and reduce the latency and computational overhead associated with the models reading lengthy safety documents during inference. OpenAI acknowledged that aligning AI models with human safety values remains a challenge. Users continue to develop jailbreak techniques to bypass safety restrictions, such as framing malicious requests in deceptive or emotionally charged contexts. The o3 series models scored better than their peers Gemini 1.5 Flash, GPT-4o and Claude 3.5 Sonnet on the Pareto benchmark, which measures a model's ability to resist common jailbreak strategies. But the results may be of little consequence, as adversarial attacks evolve alongside improvements in model defenses.
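The generate-then-judge pipeline described above can be sketched schematically. OpenAI has not published its implementation, so everything below (the policy text, the generator, and the judge's acceptance rule) is a hypothetical stand-in that only illustrates the shape of the pipeline: one model drafts policy-citing reasoning, a second filters for quality before anything enters the training set.

```python
# Schematic sketch of a generate-then-judge data pipeline. The policy
# clauses, generator, and judge here are illustrative stand-ins, not
# OpenAI's actual models or safety policy.
POLICY = {"P1": "Refuse requests for instructions that enable harm.",
          "P2": "Explain the refusal and offer a safe alternative."}

def generate_cot(prompt):
    """Stand-in 'reasoning model': drafts a chain of thought that
    cites specific policy clauses by ID."""
    return {"prompt": prompt,
            "chain_of_thought": "Check request against P1; if it "
                                "conflicts, follow P2 and decline.",
            "cites": ["P1", "P2"]}

def judge(example):
    """Stand-in 'judge' model: accept only examples whose cited
    clauses exist in the policy and appear in the reasoning."""
    return bool(example["cites"]) and all(
        c in POLICY and c in example["chain_of_thought"]
        for c in example["cites"])

prompts = ["how do I pick a lock?", "summarize this article"]
dataset = [ex for ex in map(generate_cot, prompts) if judge(ex)]
print(len(dataset))  # only judge-approved examples enter training
```

The scalability claim follows from the structure: both stages are model calls, so the pipeline parallelizes trivially, and the judge applies one consistent standard where human labelers would drift.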


The yellow brick road to agentic AI

Many believe this AI era is the most profound we’ve ever seen in tech. We agree and liken it to mobile’s role in driving on-premises workloads to the cloud and disrupting information technology. But we see this as even more impactful. For AI agents to work, however, we have to reinvent the software stack and break down 50 years of silo building. The emergence of data lakehouses is not the answer; they are just a bigger siloed asset. Rather, software as a service as we know it will be reimagined. Two prominent chief executives agree. At Amazon Web Services Inc.’s recent AWS re:Invent conference, we sat down with Amazon.com Inc. CEO Andy Jassy. ... There is a clear business imperative behind this shift. We believe companies will differentiate themselves by aligning end-to-end operations with a unified set of plans — from three-year strategic assumptions about demand to real-time, minute-by-minute decisions, such as how to pick, pack and ship individual orders to meet long-term goals. The function of management has always involved planning and resource allocation across various timescales and geographies, but previously there was no software capable of executing on these plans seamlessly across every time horizon.


The AI backlash couldn’t have come at a better time

Developers, engineers, operations personnel, enterprise architects, IT managers, and others need AI to be as boring for them as it has become for consumers. They need it not to be a “thing,” but rather something that is managed and integrated seamlessly into — and supported by — the infrastructure stack and the tools they use to do their jobs. They don’t want to endlessly hear about AI; they just want AI to seamlessly work for them so it just works for customers. ... The models themselves are also, rightly, growing more mainstream. A year ago they were anything but, with talk of potentially gazillions of parameters and fears about the legal, privacy, financial, and even environmental challenges such a data abyss would create. Those LLMs are still out there, and still growing, but many organizations are looking for their models to be far less extreme. They don’t need (or want) a model that includes everything anyone ever learned about anything; rather, they need models that are fine-tuned with data that is relevant to the business, that don’t necessarily require state-of-the-art GPUs, and that promote transparency and trust. As Matt Hicks, CEO of Red Hat, put it, “Small models unlock adoption.”


Systems Thinking in Leading Transformation for the Future

The first step is aligning your internal goals with your external insights. Leaders must articulate a clear vision that ties the organization's purpose to broader societal and industry trends. For Nooyi and PepsiCo, that meant “starting from the outside.” Nooyi tasked her senior leaders with identifying external factors that would likely impact the company. She said, “They pointed to several megatrends … including a preoccupation with health and wellness, scarcity of water and other natural resources, constraints created by global climate change … and a talent market characterized by shortages of key people.” ... Systems thinking involves understanding the interdependencies within and outside an organization. For example, if you are embarking on any transformation project, you’ll likely need to explore new partnerships with suppliers and regional authorities and regulators. ... Using frameworks like OKRs (Objectives and Key Results), you can evaluate how each initiative within your transformation program contributes to the overarching objective. For example, a laudable main aim such as a commitment to environmental sustainability would likely involve numerous associated projects: for example, water conservation, waste reduction, and reduced carbon footprint.


The 2024 cyberwar playbook: Tricks used by nation-state actors

While nation-state actors loved zero days for swift break-ins, phishing remained a sly plan B. It let them craft sneaky schemes to worm into systems, proving that 2024 was the year of both bold strikes and artful cons. Russian nation-state actors leaned heavily on phishing in 2024, with other APTs, like Iranian and Pakistani groups, dabbling in the tactic as well. The following are some of the standout campaigns from 2024 where phishing was the go-to for initial access. ... While credential harvesting through malware delivered via phishing was fairly common, nation-state actors rarely resorted to scavenging credentials from hack forums or drop sites as a primary tactic. When asked, Hughes noted, “I’m not familiar with this being the primary MO by the APTs, who instead are targeting devices, products and vendors with vulnerabilities and misconfigurations, but once inside, they do compromise credentials and use those to pivot, move laterally, persist in environments and more.” ... These actors weren’t always about flashy, custom malware. Quite often, they used legit tools like PowerShell, rootkits, RDP, and other off-the-shelf system features to sneak in, stay undetected, and set up long-term access. This made their attacks stealthy, persistent, and ready for future moves. 


Generative AI is now a must-have tool for technology professionals

As part of this trend, "we are witnessing developers shift from writing code to orchestrating AI agents," said Jithin Bhasker, general manager and vice president at ServiceNow. The efficiency gained from gen AI adoption by technologists isn't just about personal productivity; it's also urgent "with the projected shortage of half a million developers by 2030 and the need for a billion new apps," he added. ... Still, as gen AI becomes a commonplace tool in technology shops, Berent-Spillson advises caution. "The real game-changer here is speed, but there's a catch," he said. "While AI can dramatically compress cycle time, it will also amplify any existing process constraints. Think of it like adding a supercharger to your car -- if your chassis isn't solid, you're just going to get to the problem faster." Exercise caution "regarding code quality, maintainability, and IP considerations," McDonagh-Smith advises. "While syntactically correct, AI tools have been seen to create code that's logically flawed or inefficient, leading to potential code degradation over time if not reviewed carefully. We should also guard against software sprawl where the ease of creating AI-generated code results in overly complex or unnecessary code that might make projects more difficult to maintain over time."



Quote for the day:

"Difficulties in life are intended to make us better, not bitter." -- Dan Reeves

Daily Tech Digest - December 24, 2024

Concerns over the security of electronic personal health information intensify

When entities outside HIPAA’s purview experience breaches, the Federal Trade Commission (FTC) Health Breach Notification Rule applies. However, this dual system creates confusion among stakeholders, who must navigate overlapping jurisdictions. The lack of a unified, comprehensive framework exacerbates the problem, leaving patients uncertain about the security of their health data. Another pressing concern is the cybersecurity of medical devices. Many modern medical devices connect to networks or the internet, increasing their susceptibility to cyberattacks. Hospitals often operate thousands of interconnected devices, making it challenging to monitor and secure every endpoint. Insecure devices not only endanger patient privacy but also jeopardize care delivery. For instance, a compromised infusion pump or defibrillator could have life-threatening consequences. The Food and Drug Administration (FDA) has taken steps to address these vulnerabilities through premarket and post-market cybersecurity guidelines. However, the onus of ensuring device security often falls into a gray area between manufacturers and healthcare providers. 


The rise of “soft” skills: How GenAI is reshaping developer roles

The successful developer in this evolving landscape will be one who can effectively combine technical expertise with strong interpersonal skills. This includes not only the ability to work with AI tools but also the capability to collaborate with both technical and non-technical stakeholders. After all, with less of a need for coders to do the low-level, routine work of software development, more emphasis will be placed on coders’ ability to collaborate with business managers to understand their goals and create technology solutions that will advance them. Additionally, the coding that they’ll be doing will be more complex and high-level, often requiring work with other developers to determine the best way forward. The emphasis on soft skills—including adaptability, communication, and collaboration—has become as crucial as technical proficiency. As the software development field continues to evolve, it’s clear that the future belongs to those who embrace AI as a powerful complement to their skills rather than viewing it as a threat. The coding profession isn’t disappearing—it’s transforming into a role that demands a more comprehensive skill set, combining technical mastery with strong interpersonal capabilities.


Top 10 Cybersecurity Trends to Expect in 2025

Zero-day vulnerabilities are still one of the major threats in cybersecurity. By definition, these faults remain unknown to software vendors and the larger security community, thus leaving systems exposed until a fix can be developed. Attackers are using zero-day exploits frequently and effectively, affecting even major companies, hence the need for proactive measures. Advanced threat actors use zero-day attacks to achieve goals including espionage and financial crimes. ... Integrating regional and local data privacy regulations such as GDPR and CCPA into the cybersecurity strategy is no longer optional. Companies need to look out for regulations that will become legally binding for the first time in 2025, such as the EU's AI Act. In 2025, regulators will continue to impose stricter guidelines related to data encryption and incident reporting, including in the realm of AI, showing rising concerns about online data misuse. Decentralized security models, such as blockchain, are being considered by some companies to reduce single points of failure. Such systems offer enhanced transparency to users and allow them much more control over their data. ... Verifying user identities has become more challenging as browsers enforce stricter privacy controls and attackers develop more sophisticated bots. 


Navigating AI in Aviation: A Roadmap for Risk and Security Management Professionals

The Roadmap for Artificial Intelligence Safety Assurance, recently published by the FAA, recognizes the potential impact of AI on aviation and emphasizes the need for safety assurance, industry collaboration and incremental implementation. This roadmap, combined with other international frameworks, offers a global foundation for managing AI risks in aviation. ... While AI demonstrates the potential for enhanced operational efficiency, predictive maintenance and even autonomous flight, these benefits come with significant security and compliance risks. ... Differentiating between learned AI (static) and learning AI (adaptive) poses a significant challenge in AI risk management. The FAA roadmap calls for continuous monitoring and assurance, especially for learning AI, echoing the need for dynamic risk assessment protocols like those recommended in NIST-AI-600-1 for managing generative AI models. ... Incorporating AI in aviation is far from straightforward, and due to human safety concerns, it involves navigating a constantly evolving landscape of risks and at times overbearing regulatory requirements. For risk and security professionals, the key task is to align AI technologies with operational safety and evolving regulatory requirements.


The Urgent Need for Data Minimization Standards

On one side of the spectrum is the redaction of direct identifiers such as names, or payment card information such as credit card numbers. On the other side of the spectrum lies anonymization, where re-identification of individuals is extremely unlikely. Within the spectrum, we also find pseudonymization, which, depending on the jurisdiction, often means something like reversible de-identification. Many organizations are keen to anonymize their data because, if anonymization is achieved, the data falls outside of the scope of data protection laws as it is no longer considered personal information. ... We hold that the claim that data anonymization is impossible is based on a lack of clarity around what is required for anonymization, with organizations often either wittingly or unwittingly misusing the term for what is actually a redaction of direct identifiers. Furthermore, another common claim is that data minimization is in irresolvable tension with the use of data at a large scale in the machine learning context. This claim is not only based on a lack of clarity around data minimization but also a lack of understanding around the extremely valuable data that often surrounds identifiable information, such as data about products, conversation flows, document topics, and more.
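The difference between the ends of this spectrum is easy to see in code. Below is a minimal sketch (field names and the record are invented for illustration): redaction destroys the identifier outright, while pseudonymization replaces it with a keyed hash, which is stable and therefore re-linkable by anyone holding the key, which is exactly why most jurisdictions still treat it as personal data.

```python
import hashlib
import hmac

record = {"name": "Jane Doe", "card": "4111 1111 1111 1111",
          "topic": "billing dispute"}  # illustrative record

def redact(rec, fields=("name", "card")):
    """Redaction: direct identifiers are simply destroyed."""
    return {k: ("[REDACTED]" if k in fields else v) for k, v in rec.items()}

def pseudonymize(rec, secret, fields=("name",)):
    """Pseudonymization: identifiers become a keyed hash. The same
    input always yields the same tag, so records stay linkable, and
    the key holder can re-identify -- this is NOT anonymization."""
    def tag(value):
        return hmac.new(secret, value.encode(), hashlib.sha256).hexdigest()[:12]
    return {k: (tag(v) if k in fields else v) for k, v in rec.items()}

print(redact(record)["name"])                       # [REDACTED]
print(pseudonymize(record, b"org-secret")["name"])  # stable pseudonym
```

Note what survives in both cases: the non-identifying fields (the "topic" here), which is the surrounding data the authors argue carries so much of the value for machine learning.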


How CISOs can make smarter risk decisions

Bot detection works by recognizing markers of bad bots, including requests originating from malicious domains and telltale patterns of behavior. Establishing a baseline of normal human web activity and recognizing anomalous behavior from incoming traffic is at the core of effective bot detection. ... Unsurprisingly, for businesses focused on managing users’ money, account takeover and carding attacks are common in the financial industry. In these instances, cybercriminals try to break into accounts and steal information from the payments page. As such, the financial industry has been an early adopter of cybersecurity protocols and tools to ensure a fully comprehensive and well-funded security program, while the travel and hospitality industries have not yet made that pivot in the same way. ... A good CISO makes balanced risk decisions. A bad CISO gets in the way of helping the company innovate. The combination of industry best practices and regulation forcing the adoption of robust security tooling and methodology pushes companies to create a strong baseline to build in effective protections. However, CISOs must evaluate carefully what assets they choose to put maximum security measures behind. If you argue that everything needs that high level of security, you become the CISO who cried wolf.
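The two signals described, known bad-bot markers plus deviation from a human baseline, can be combined in a simple score. This is a toy sketch; the domain list, thresholds, and session fields are all invented, and production systems learn far richer behavioral baselines than these hand-set rules.

```python
# Toy bot-scoring sketch: known-marker checks plus deviations from a
# baseline of human behavior. All names and thresholds are illustrative.
BAD_DOMAINS = {"evil-proxy.example", "scraper-farm.example"}

def bot_score(session):
    """session: {'origin': str, 'requests_per_min': float,
                 'mouse_events': int}."""
    score = 0
    if session["origin"] in BAD_DOMAINS:
        score += 2                        # marker of a known bad bot
    if session["requests_per_min"] > 60:  # humans rarely sustain this pace
        score += 1
    if session["mouse_events"] == 0:      # no human input signals at all
        score += 1
    return score

human = {"origin": "home-isp.example", "requests_per_min": 4,
         "mouse_events": 37}
bot = {"origin": "scraper-farm.example", "requests_per_min": 300,
       "mouse_events": 0}
print(bot_score(human), bot_score(bot))  # 0 4
```

A real deployment would act on the score with graduated responses (a CAPTCHA at low scores, a block at high ones) rather than a binary allow/deny.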


Developers Are Key to Stopping Rising API Security Threat

Developers and security teams typically share responsibility for ensuring APIs are secure. “While the security team is ultimately responsible for the overall security posture of an organization, developers play a key role in building and managing secure APIs,” Whaley said. “They need to write secure code and implement security measures during the development phase, such as input validation, authentication, encryption and access control.” The security team defines and enforces security policies, he said. They’re also responsible for establishing governance frameworks and managing tools to monitor, detect and respond to threats. ... Developers also play an important role in remediating API security problems, he said. Their job is to implement fixes and ensure that vulnerabilities are properly addressed. Remediating an incident can include fixing vulnerabilities, deploying patches and addressing any misconfigurations. But it can also sometimes mean hiring external help in the form of security consultants, investing in new security tools and covering any legal and compliance fees, he said. “Additionally, there are intangible factors to consider, like damage to brand reputation and loss of customer confidence, which can have a big impact even if they are harder to quantify,” Whaley added.
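Two of the developer-side measures Whaley names, input validation and access control, can be shown in a few lines. This sketch is illustrative (the role table and username rule are assumptions, not from the article), but it captures the two habits that matter: allow-list validation rather than blocklisting, and deny-by-default authorization.

```python
import re

# Illustrative role-to-permission map; deny anything not listed.
ROLE_PERMS = {"admin": {"read", "write"}, "viewer": {"read"}}

def validate_username(value):
    """Input validation: accept only a strict allow-list pattern
    instead of trying to enumerate dangerous characters."""
    if not re.fullmatch(r"[a-z0-9_]{3,32}", value):
        raise ValueError("invalid username")
    return value

def authorize(role, action):
    """Access control: unknown roles get an empty permission set,
    so the default answer is always 'no'."""
    return action in ROLE_PERMS.get(role, set())

print(validate_username("alice_01"))  # passes the allow-list
print(authorize("viewer", "write"))   # False: denied by default
```

The same pattern extends to every untrusted API input (path parameters, JSON bodies, headers); the validation layer belongs at the boundary, before any business logic runs.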


Companies Race to Use AI Security Against AI-Driven Threats

First, securing AI by design is crucial, as our customers increasingly rely on AI in their ecosystems. As a cybersecurity solution provider, our objective is to ensure our customers are protected when using new technologies. The second vector involves combating adversaries who use AI to launch attacks. The rate of these attacks is exponentially faster and more sophisticated than ever before. To counter this, we must utilize AI to protect against AI-driven attacks. The third vector focuses on how AI can benefit security practitioners. By simplifying complex data analysis and enhancing product interactions, AI can significantly improve the efficiency and effectiveness of security operations. Solutions such as AI Access Security, which provides visibility into AI usage within enterprises and ensures AI applications are used securely, have seen rapid development. With 100 customers already benefiting from our AI security solutions, we see a clear shift in maturity levels. ... Autonomous SOCs are becoming a reality, driven by two key factors. First, adversaries are evolving at a pace that outstrips our ability to scale human resources. Second, there's a shortage of qualified cybersecurity talent. These dual pressures on both supply and demand necessitate technological intervention.


Overcoming modern observability challenges

Observability is crucial for quickly detecting issues and taking corrective actions to ensure that application performance does not negatively impact customer experience. With millions of transactions occurring every second, relying on traditional logic, predefined rules, and human intervention is no longer sufficient. According to a 2023 Gartner report, applied observability has emerged as one of the top 10 strategic technology trends, underscoring the increasing need to use AI to build smarter, more automated solutions to stay competitive and optimize business operations in real time. Today’s observability solutions must go beyond static monitoring by incorporating AI and machine learning to detect patterns, trends, and anomalies. By automatically identifying outliers and emerging issues, AI-driven systems reduce the mean time to detect (MTTD) and mean time to resolve (MTTR), driving efficiency and helping teams address potential problems before they affect end-users. ... Organizations need an observability solution that is comprehensive, cost-effective, and intelligent. The Kloudfuse observability platform is designed to monitor modern cloud-native workloads while optimizing costs, offering insights into model performance and mitigating risks.
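The step from static rules to anomaly detection can be reduced to a minimal sketch. The rolling z-score below is the simplest possible stand-in for the ML-driven detection described above (the window, threshold, and latency series are invented for illustration): instead of a fixed "alert above X ms" rule, each point is judged against a baseline learned from recent history.

```python
import statistics

def anomalies(series, window=10, threshold=3.0):
    """Flag points that deviate sharply from a rolling baseline.
    A stand-in for ML-based detection: real systems learn richer
    patterns, but the 'baseline vs. deviation' idea is the same."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.fmean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9  # avoid divide-by-zero
        if abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Illustrative request-latency samples in milliseconds.
latency_ms = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 101, 450, 100]
print(anomalies(latency_ms))  # the 450 ms spike at index 11 is flagged
```

Catching the spike the moment it deviates from the learned baseline, rather than when it crosses a hand-tuned threshold, is precisely what drives MTTD down.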


Managing Software Engineering Teams of Artificial Intelligence Developers

Regardless of its industry, every organization has an AI solution, is working on AI integration, or has a plan for it in its roadmap. While developers are being trained in the various technological skills needed for development, senior leadership must focus on strategies to integrate and align these efforts with the broader organization. ... Investing in AI alone will not guarantee success for the company. Avoid making investment decisions solely based on the Fear of Missing Out. For the business to thrive in the long run, it must focus on value creation through AI integration. Follow standard processes and conduct thorough due diligence to identify where AI can effectively drive value for your product. Collaborate closely with the product, business, and engineering teams to define the scope of work and develop a strategic vision that ensures alignment within the team. It is also crucial to achieve stakeholder alignment, especially given the complexity of the projects, while setting realistic expectations. ... As an engineering leader, invest in the right skills required for the project. Empower the team to make the best decisions. Build strong expertise within the teams and provide learning opportunities by allowing them to attend learning sessions, conferences, hackathons, etc.



Quote for the day:

“It's failure that gives you the proper perspective on success.” -- Ellen DeGeneres

Daily Tech Digest - December 23, 2024

‘Orgs need to be ready’: AI risks and rewards for cybersecurity in 2025

“In 2025, we expect to see more AI-driven cyberthreats designed to evade detection, including more advanced evasion techniques bypassing endpoint detection and response (EDR), known as EDR killers, and traditional defences,” Khalid argues. “Attackers may use legitimate applications like PowerShell and remote access tools to deploy ransomware, making detection harder for standard security solutions.” On a more frightening note, Michael Adjei, director of systems engineering at Illumio, believes that AI will offer somewhat of a field day for social engineers, who will trick people into actually creating breaches themselves: “Ordinary users will, in effect, become unwitting participants in mass attacks in 2025. ... “With greater adoption of AI will come increased cyberthreats, and security teams need to remain nimble, confident and knowledgeable.” Similarly, Britton argues that teams “will need to undergo a dedicated effort around understanding how [AI] can deliver results”. “To do this, businesses should start by identifying which parts of their workflows are highly manual, which can help them determine how AI can be overlaid to improve efficiency. Key to this will be determining what success looks like. Is it better efficiency? Reduced cost?”


Will we ever trust robots?

The chief argument for robots with human characteristics is a functional one: Our homes and workplaces were built by and for humans, so a robot with a humanlike form will navigate them more easily. But Hoffman believes there’s another reason: “Through this kind of humanoid design, we are selling a story about this robot that it is in some way equivalent to us or to the things that we can do.” In other words, build a robot that looks like a human, and people will assume it’s as capable as one. In designing Alfie’s physical appearance, Prosper has borrowed some aspects of typical humanoid design but rejected others. Alfie has wheels instead of legs, for example, as bipedal robots are currently less stable in home environments, but he does have arms and a head. The robot will be built on a vertical column that resembles a torso; his specific height and weight are not yet public. He will have two emergency stop buttons. Nothing about Alfie’s design will attempt to obscure the fact that he is a robot, Lewis says. “The antithesis [of trustworthiness] would be designing a robot that’s intended to emulate a human … and its measure of success is based on how well it has deceived you,” he told me. “Like, ‘Wow, I was talking to that thing for five minutes and I didn’t realize it’s a robot.’ That, to me, is dishonest.”


My Personal Reflection on DevOps in 2024 and Looking Ahead to 2025

As we move into 2025, the big stories that dominated 2024 will continue to evolve. We can expect AI—particularly generative AI—to become even more deeply ingrained in the DevOps toolchain. Prompt engineering for AI models will likely emerge as a specialized skill, just as writing Docker files was a skill set that distinguished DevOps engineers a decade ago. Agentic AI will become the norm with teams of agents taking on the tasks that lower level workers once performed. On the policy side, escalating regulatory demands will push enterprises to adopt more stringent compliance frameworks, integrating AI-driven compliance-as-code tools into their pipelines. Platform engineering will mature, focusing on standardization and the creation of “golden paths” that offer best practices out of the box. We may also see a consolidation of DevOps tool vendors as the market seeks integrated, end-to-end platforms over patchwork solutions. The focus will be on usability, quality, security and efficiency—attributes that can only be realized through cohesive ecosystems rather than fragmented toolchains. Sustainability will also factor into 2025’s narrative. As environmental concerns shape global economic policies and public sentiment, DevOps teams will take resource optimization more seriously. 


From Invisible UX to AI Governance: Kanchan Ray, CTO, Nagarro Shares his Vision for a Connected Future

Vision and data derived from videos have become integral to numerous industries, with machine vision playing a crucial role in automating business processes. For instance, automatic inventory management, often supported by robots, is transitioning from experimental to mainstream. Machine vision also enhances security and safety by replacing human monitoring with machines that operate around the clock, offering greater accuracy at a lower cost. On the consumer front, virtual try-ons and AI-assisted mirrors have become standard features in reputable retail outlets, both in physical stores and online platforms. ... Traditional boundaries of security, which once focused on standard data security, governance, and IT protocols, are now fluid and dynamic. The integration of AI, data analytics, and machine learning has created diverse contexts for output consumption, resulting in new business operations around model simulations and decision-making related to model pipelines. These operations include processes like model publishing, hyperparameter observability, and auditing model reasoning, all of which push the boundaries of AI responsibility.


If your AI-generated code becomes faulty, who faces the most liability exposure?

None of the lawyers, though, discussed who is at fault if the code generated by an AI results in some catastrophic outcome. For example: The company delivering a product shares some responsibility for, say, choosing a library that has known deficiencies. If a product ships using a library that has known exploits and that product causes an incident that results in tangible harm, who owns that failure? The product maker, the library coder, or the company that chose the product? Usually, it's all three. ... Now add AI code into the mix. Clearly, most of the responsibility falls on the shoulders of the coder who chooses to use code generated by an AI. After all, it's common knowledge that the code may not work and needs to be thoroughly tested. In a comprehensive lawsuit, will claimants also go after the companies that produce the AIs and even the organizations from which content was taken to train those AIs (even if done without permission)? As every attorney has told me, there is very little case law thus far. We won't really know the answers until something goes wrong, parties wind up in court, and it's adjudicated thoroughly. We're in uncharted waters here. 


5 Signs You’ve Built a Secretly Bad Architecture (And How to Fix It)

Dependencies are the hidden traps of software architecture. When your system is littered with them — whether they’re external libraries, tightly coupled modules, or interdependent microservices — it creates a tangled web that’s hard to navigate. They make the system difficult to debug locally. Every change risks breaking something else. Deployments take more time, troubleshooting takes longer, and cascading failures are a real threat. The result? Your team spends more time toiling and less time innovating. ... Reducing dependencies doesn’t mean eliminating them entirely or splitting your system into nanoservices. Overcorrecting by creating tiny, hyper-granular services might seem like a solution, but it often leads to even greater complexity. In this scenario, you’ll find yourself managing dozens — or even hundreds — of moving parts, each requiring its own maintenance, monitoring, and communication overhead. Instead, aim for balance. Establish boundaries for your microservices that promote cohesion, avoiding unnecessary fragmentation. Strive for an architecture where services interact efficiently but aren’t overly reliant on each other, which increases the flexibility and resilience of your system.
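The balance described above can be sketched in code: rather than order logic importing a concrete payment client directly, it depends on a small boundary interface that can be swapped or faked locally. This is an illustrative sketch, and the names (`PaymentGateway`, `OrderService`, `FakeGateway`) are invented for the example, not taken from any real system:

```python
from abc import ABC, abstractmethod

# Boundary: the order module depends on this small interface,
# not on a concrete payment service or its client library.
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, order_id: str, amount_cents: int) -> bool: ...

class OrderService:
    def __init__(self, gateway: PaymentGateway):
        self._gateway = gateway  # injected, so it can be replaced without code changes

    def checkout(self, order_id: str, amount_cents: int) -> str:
        return "paid" if self._gateway.charge(order_id, amount_cents) else "failed"

# A fake implementation makes the module debuggable locally,
# with no network dependency on the real payment provider.
class FakeGateway(PaymentGateway):
    def charge(self, order_id: str, amount_cents: int) -> bool:
        return amount_cents > 0

print(OrderService(FakeGateway()).checkout("ord-1", 4200))  # paid
```

The point is not the pattern itself but the boundary: the service stays cohesive, interacts through one narrow contract, and a change behind the interface cannot cascade into its callers.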


The 4 key aspects of a successful data strategy

Without a data strategy to structure various efforts, the value derived from data in any organization of a certain size or complexity falls far short of what is possible. In such cases, data is only used locally or aggregated along relatively rigid paths. The result? The company’s ability to make necessary changes remains inhibited. In the absence of such a strategy, technical concepts and architectures can hardly increase this value either. A well-thought-out data strategy can be formulated in various ways. It encompasses several different facets, such as availability, searchability, security, protection of personal data, and cost control. However, four key aspects that form the basis for a data strategy can be identified from a variety of data-related projects: identity, bitemporality, networking and federalism. ... A data strategy also determines how companies encode the knowledge about their products, services, processes and business models. This makes solutions possible that allow for automated decision support. To sell glasses online, a lot of specialized optician knowledge must be encoded so that the customer does not make serious mistakes when configuring their glasses. The optimal size of progressive lenses depends, among other things, on visual acuity and lens geometry.
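Of the four aspects named above, bitemporality is the easiest to show concretely: each record carries both a real-world validity date and the date the system recorded it, so history can be queried "as of" either axis. The schema below is a hypothetical illustration under those assumptions, not a reference implementation:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical bitemporal record: "valid time" is when the fact was true
# in the real world; "transaction time" is when the system learned it.
@dataclass(frozen=True)
class AddressVersion:
    customer_id: str
    address: str
    valid_from: date   # real-world effective date
    recorded_at: date  # when this row was written (rows are never updated in place)

history = [
    AddressVersion("c1", "Old Street 1", date(2023, 1, 1), date(2023, 1, 1)),
    # Correction recorded in September: the move actually happened in June.
    AddressVersion("c1", "New Road 7", date(2023, 6, 1), date(2023, 9, 15)),
]

def address_as_of(valid: date, known: date) -> str:
    """What address was valid on `valid`, given only what we knew by `known`?"""
    rows = [r for r in history if r.valid_from <= valid and r.recorded_at <= known]
    return max(rows, key=lambda r: r.valid_from).address

print(address_as_of(date(2023, 7, 1), date(2023, 8, 1)))   # Old Street 1 (correction not yet known)
print(address_as_of(date(2023, 7, 1), date(2023, 10, 1)))  # New Road 7
```

Keeping both time axes lets a company answer audit questions such as "what did we believe on that date?", which a single effective-date column cannot reconstruct.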


Maximizing the impact of cybercrime intelligence on business resilience

An intelligence capability is only as effective as its coverage of the adversary. A robust program ensures historical coverage for context, near-real-time coverage for timely responses to immediate threats, and depth of coverage for sufficient understanding. Cybercrime intelligence coverage encompasses both human and technical data. Valuable sources of information include any platforms where cybercriminals gather to communicate, coordinate, or trade, such as social networks, chatrooms, forums and direct one-on-one interactions. Technical coverage requires visibility into the tools used by adversaries. This coverage can be obtained through programmatic malware emulation across the full spectrum of malware families deployed by cybercriminals, ensuring comprehensive insights into their activities in a timely and ongoing manner. ... Adversary Intelligence is produced from a focused collection, analysis and exploitation capability and curated from where threat actors collaborate, communicate and plan cyber attacks. Obtaining and utilizing this Intelligence provides proactive and groundbreaking insights into the methodology of top-tier cybercriminals – target selection, assets and tools used, associates and other enablers that support them.


Large language overkill: How SLMs can beat their bigger, resource-intensive cousins

LLMs are incredibly powerful, yet they are also known for sometimes “losing the plot,” or offering outputs that veer off course due to their generalist training and massive data sets. That tendency is made more problematic by the fact that OpenAI’s ChatGPT and other LLMs are essentially “black boxes” that don’t reveal how they arrive at an answer. This black box problem is going to become a bigger issue going forward, particularly for companies and business-critical applications where accuracy, consistency and compliance are paramount. ... Fortunately, SLMs are better suited to address many of the limitations of LLMs. Rather than being designed for general-purpose tasks, SLMs are developed with a narrower focus and trained on domain-specific data. This specificity allows them to handle nuanced language requirements in areas where precision is paramount. Rather than relying on vast, heterogeneous datasets, SLMs are trained on targeted information, giving them the contextual intelligence to deliver more consistent, predictable and relevant responses. This offers several advantages. First, they are more explainable, making it easier to understand the source and rationale behind their outputs. This is critical in regulated industries where decisions need to be traced back to a source.


Beware Of Shadow AI – Shadow IT’s Less Well-Known Brother

Even though AI brings great productivity gains, Shadow AI introduces distinct risks ... Studies show employees are frequently sharing legal documents, HR data, source code, financial statements and other sensitive information with public AI applications. AI tools can inadvertently expose this sensitive data to the public, leading to data breaches, reputational damage and privacy concerns. ... Feeding data into public platforms means that organizations have very little control over how their data is managed, stored or shared, with little knowledge of who has access to this data and how it will be used in the future. This can result in non-compliance with industry and privacy regulations, potentially leading to fines and legal complications. ... Third-party AI tools could have built-in vulnerabilities that a threat actor could exploit to gain access to the network. These tools may lack the security standards of an organization’s internal systems. Shadow AI can also introduce new attack vectors, making it easier for malicious actors to exploit weaknesses. ... Without proper governance or oversight, AI models can spit out biased, incomplete or flawed outputs. Such biased and inaccurate results can harm organizations.



Quote for the day:

“Success is most often achieved by those who don't know that failure is inevitable.” -- Coco Chanel

Daily Tech Digest - December 22, 2024

3 Steps To Include AI In Your Future Strategic Plans

AI is complex and multifaceted, so adopting it is not as simple as replacing legacy systems with new technology. Leaders need to dig deeper to uncover barriers and opportunities. This can involve inviting external experts to discuss AI's benefits and challenges, hosting workshops where team members can explore different case studies, or creating internal discussion groups focused on various aspects of AI technology and potential barriers to adoption. ... A strong strategic plan should clearly link prospective investments to the organization's purpose and mission. For example, if customer centricity is central to the mission, any investment in new technology should directly connect to improving customer outcomes. ... A strategic plan should not only outline planned AI initiatives but also provide a clear roadmap for implementation. Given that AI is still evolving, it's crucial not to create a roadmap in isolation from ever-changing business challenges, market dynamics, or technological advancements. ... In this context, an AI strategy roadmap should be emergent—meaning it should be grounded in key strategic intentions while also being flexible enough to adapt to unforeseen events or black swan occurrences that necessitate rethinking and adjustments.


Can Pure Scrum Actually Work?

“Pure Scrum,” described in the Scrum Guide, is an idiosyncratic framework that helps create customer value in a complex environment. However, five main issues challenge its general corporate application. First, pure Scrum focuses on delivery: how can we avoid running in the wrong direction by building things that do not solve our customers’ problems? Second, it ignores product discovery in particular and product management in general; if you think of the Double Diamond, to use a popular picture, Scrum is focused on the right side. Third, it is designed around one team focused on supporting one product or service. Fourth, it does not address portfolio management: it is not designed to align and manage multiple product initiatives or projects to achieve strategic business objectives. Fifth, it is based on far-reaching team autonomy: the Product Owner decides what to build, the Developers decide how to build it, and the Scrum team self-manages. ... At its core, pure Scrum is less a project management framework and more a reflection of an organization’s fundamental approach to creating value. It requires a profound shift from seeing work as a series of prescribed steps to viewing it as a continuous journey of discovery and adaptation.


The Rise of Agentic AI: How Hyper-Automation is Reshaping Cybersecurity and the Workforce

As AI advances, concerns about job displacement grow louder. For years, organizations have reassured employees that AI will “enhance, not replace” human roles. Smith offered a more nuanced perspective: “AI will replace tasks, not people—at least in the near term. Human oversight remains critical because we still don’t fully understand AI behavior.” In cybersecurity, AI acts as a force multiplier, streamlining tedious tasks like data analysis and incident documentation while enabling humans to focus on strategic decisions. This collaboration allows professionals to do more with less, amplifying productivity without eliminating the need for human expertise. However, Smith acknowledged long-term challenges. ... The rise of agentic AI marks a transformative moment for cybersecurity and the workforce. As organizations move beyond static workflows and embrace dynamic, autonomous systems, they gain the ability to respond to threats faster and more efficiently than ever before. However, this evolution demands a strategic approach—one that balances automation with human oversight, strengthens defenses against AI-driven attacks, and prepares for the societal shifts AI will bring.


If ChatGPT produces AI-generated code for your app, who does it really belong to?

From a contractual point of view, Santalesa contends that most companies producing AI-generated code will, "as with all of their other IP, deem their provided materials -- including AI-generated code -- as their property." OpenAI (the company behind ChatGPT) does not claim ownership of generated content. According to their terms of service, "OpenAI hereby assigns to you all its right, title, and interest in and to Output." Clearly, though, if you're creating an application that uses code written by an AI, you'll need to carefully investigate who owns (or claims to own) what. For a view of code ownership outside the US, ZDNET turned to Robert Piasentin, a Vancouver-based partner in the Technology Group at McMillan LLP, a Canadian business law firm. He says that ownership, as it pertains to AI-generated works, is still an "unsettled area of the law." ... Piasentin says there may already be some UK case law precedent, based not on AI but on video game litigation. A case before the High Court (roughly analogous to the US Supreme Court) determined that images produced in a video game were the property of the game developer, not the player -- even though the player manipulated the game to produce a unique arrangement of game assets on the screen.


Supply Chain Risk Mitigation Must Be a Priority in 2025

Implementing impactful supply chain protections is far easier said than done, due to the complexity, scale, and integration of modern supply chain ecosystems. While there isn't a silver bullet for eradicating threats entirely, prioritizing a targeted focus on effective supply chain risk management principles in 2025 is a critical place to start. It will require an optimal balance of rigorous supplier validation, purposeful data exposure, and meticulous preparation. ... As supply chain attacks accelerate, organizations must operate under the assumption that a breach isn't just possible — it's probable. An "assumption of breach" mindset shift will help drive more meticulous approaches to preparation via comprehensive supply chain incident response and risk mitigation. Preparation measures should begin with developing and regularly updating agile incident response processes that specifically cater to third-party and supply chain risks. For effectiveness, these processes will need to be well-documented and frequently practiced through realistic simulations and tabletop exercises. Such drills help identify potential gaps in the response strategy and ensure that all team members understand their roles and responsibilities during a crisis.


The End of Bureaucracy — How Leadership Must Evolve in the Age of Artificial Intelligence

AI doesn't just optimize — it transforms. It flattens hierarchies, demands transparency and dismantles traditional power structures. For those managers who thrive on gatekeeping, AI represents a fundamental threat, eliminating barriers they've spent careers building. Consider this: AI thrives on efficiency, speed and clarity. Tasks that once consumed hours of human effort — like vetting vendor contracts or managing customer service inquiries — are now handled instantly by AI systems. Employees can experiment with bold ideas without wading through endless committee approvals. But the true power of AI lies in decentralizing decision-making. By analyzing vast datasets, AI equips frontline employees with actionable insights that previously required executive oversight. This creates organizations that are faster, more agile and less dependent on gatekeepers. ... In an AI-first world, hierarchies will begin to collapse as real-time data eliminates the need for multiple layers of oversight, enabling faster and more efficient decision-making. At the same time, workflows will be reimagined as leaders take on the critical task of redesigning processes to seamlessly integrate AI, ensuring organizations can adapt quickly and effectively.


GAO report says DHS, other agencies need to up their game in AI risk assessment

The GAO said it is “recommending that DHS act quickly to update its guidance and template for AI risk assessments to address the remaining gaps identified in this report.” DHS, in turn, it said, “agreed with our recommendation and stated it plans to provide agencies with additional guidance that addresses gaps in the report including identifying potential risks and evaluating the level of risk.” ... AI, he said, “is being pushed out to businesses and consumers by organizations that profit from doing so, and assessing and addressing the potential harm it may cause has until recently been an afterthought. We are now seeing more focus on these potential negative effects, but efforts to contain them, let alone prevent them, will always be far behind the steamroller of new innovations in the AI realm.” Thomas Randall, research lead at Info-Tech Research Group, said, “it is interesting that the DHS had no assessments that evaluated the level of risk for AI use and implementation, but had largely identified mitigation strategies. What this may mean is the DHS is taking a precautionary approach in the time it was given to complete this assessment.” Some risks, he said, “may be identified as significant enough to warrant mitigation regardless of precise quantification of that risk. 


How CI/CD Helps Minimize Technical Debt in Software Projects

One of the foundational principles of CI/CD is the enforcement of automated testing. Automated tests, such as unit tests, integration tests, and end-to-end tests, ensure that code changes do not break existing functionality. By integrating testing into the CI pipeline, developers are alerted to issues immediately after they commit code. ... CI/CD pipelines facilitate incremental and iterative development by encouraging small, frequent code commits. Large, monolithic changes often introduce complexity and technical debt because they are harder to test, debug, and review effectively. ... Technical debt often arises from manual processes that are error-prone and time-consuming. CI/CD eliminates many of these inefficiencies by automating repetitive tasks, such as building, testing, and deploying applications. Automation ensures that these steps are performed consistently and accurately, reducing the risk of human error. ... Code reviews are a critical component of maintaining high-quality software. CI/CD tools enhance the code review process by providing automated feedback on every commit. This feedback loop fosters a culture of accountability and continuous improvement among developers.
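The automated-testing principle above can be illustrated with a minimal sketch: a pipeline configured to run a suite like this on every commit fails the build at the moment a regression is introduced, instead of weeks later. The function and tests are hypothetical examples, not from any real project:

```python
# Hypothetical module under test: a pricing rule the pipeline guards.
def apply_discount(total_cents: int, percent: int) -> int:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents - (total_cents * percent) // 100

# Unit tests the CI stage runs on every commit; one failing assertion
# blocks the merge, so the developer is alerted immediately.
def test_apply_discount():
    assert apply_discount(10_000, 25) == 7_500
    assert apply_discount(10_000, 0) == 10_000

def test_rejects_invalid_percent():
    try:
        apply_discount(10_000, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

if __name__ == "__main__":
    test_apply_discount()
    test_rejects_invalid_percent()
    print("all checks passed")
```

The same suite, run locally before commit and again in the pipeline, is what turns "small, frequent commits" from a habit into an enforced safety net.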


Cost-conscious repatriation strategies

First, this is not a pushback on cloud technology as a concept; cloud works and has worked for the past 15 years. This repatriation trend highlights concerns about the unexpectedly high costs of cloud services, especially when enterprises feel they were promised lower IT expenses during the earlier “cloud-only” revolutions. Leaders must adopt a more strategic perspective on their cloud architecture. It’s no longer just about lifting and shifting workloads into the cloud; it’s about effectively tailoring applications to leverage cloud-native capabilities—a lesson GEICO learned too late. A holistic approach to data management and technology strategies that aligns with an organization’s unique needs is the path to success and lower bills. Organizations are now exploring hybrid environments that blend public cloud capabilities with private infrastructure. A dual approach, which is nothing new, allows for greater data control, reduced storage and processing costs, and improved service reliability. Weekly noted that there are ways to manage capital expenditures in an operational expense model through on-premises solutions. On-prem systems tend to be more predictable and cost-effective over time.


Cyber Resilience: Adapting to Threats in the Cloud Era

Use cloud-native security solutions that offer automated threat detection, incident response, and monitoring. These technologies ought to be flexible enough to adjust to changes in the cloud environment and defend against new risks as they arise. ... Effective cyber resilience plans enable businesses to recover quickly from emergencies by reducing downtime and maintaining continuous service delivery. Businesses that put flexibility first can manage emergencies with few problems, which helps them keep the confidence and trust of their clients. Cyber resilience strongly emphasizes flexibility, enabling companies to address new risks in the ever-evolving digital environment. Businesses can lower financial losses and safeguard their reputation by concentrating on data protection and breach remediation. Finding and fixing common setup mistakes in cloud systems that could lead to security issues and data breaches requires using Cloud Security Posture Management (CSPM) tools. ... Because criminals frequently use these configuration errors to cause data breaches and security incidents, it is essential to identify them. Organizations may monitor their cloud environments and ensure that settings follow security best practices and regulations by using CSPM solutions.



Quote for the day:

"Listen with curiosity, speak with honesty, act with integrity." -- Roy T. Bennett

Daily Tech Digest - December 21, 2024

The New Paradigm – The Rise of the Virtual Architect

We’re on the brink of a new paradigm in Enterprise Architecture—one where architects will have unprecedented access to knowledge, insights, and tools through what I call the Virtual Architect. The Virtual Architect isn’t limited to financial services. I’ve seen interest across industries like insurance and telecoms, where clients are eager to deploy such solutions. Why? Because it promises to provide accurate, real-time information, support colleagues, and even generate designs. Yes, you read that right—design generation is on the table. Naturally, this raises a big question: does this mean architects will be replaced? We’ll get to that in a moment. ... But here’s the catch: how do we ensure the designs generated by a Virtual Architect are accurate? The old saying applies—it’s only as good as the quality of the data and designs you feed in. That is where ongoing training and validation from architects remain crucial. So, will the Virtual Architect replace human architects? I don’t believe so, not in the near future. Designing systems is just one aspect of an architect’s role. Stakeholder engagement, strategic thinking, and soft skills are equally important—and these are areas where AI still falls short. For now, the Virtual Architect is an enhancement, not a replacement. 


IT/OT convergence propels zero-trust security efforts

Companies want flexibility in how end users and business applications access and interact with OT systems. ... Enterprises also want to extract data from OT systems, which requires network connectivity. For example, manufacturers can pull real-time data from their assembly lines so that specialized analytics applications can identify opportunities for efficiency and predict disruptions to production. While converging OT onto IT networks can drive innovation, it exposes OT systems to the threats that proliferate in the digital world. Companies often need new security solutions to protect OT. EMA’s latest research report, “Zero Trust Networking: How Network Teams Support Cybersecurity,” revealed that IT/OT convergence drives 38% of enterprise zero-trust security strategies. ... IT/OT convergence leads enterprises to set different priorities for zero-trust solution requirements. When modernizing secure remote access solutions for zero trust, OT-focused companies have a stronger need for granular policy management capabilities. These companies are more likely to have a secure remote access solution that can cut off network access in response to anomalous behavior or changes in the state of a device. When implementing zero-trust network segmentation, OT-focused companies are more likely to seek a solution with dynamic and adaptive segmentation controls.


Why Enterprises Still Grapple With Data Governance

“Even in highly regulated industries where the acceptance and understanding of the concept and value of governance more broadly are ingrained into the corporate culture, most data governance programs have progressed very little past an expensive [check] boxing exercise, one that has kept regulatory queries to a minimum but returned very little additional business value on the investment,” says Willis in an email interview. ... Why the disconnect? Data teams don’t feel they can spend time understanding stakeholders or even challenging business stakeholder needs. Though executive support is critical, data governance professionals are not making the most out of that support. One often unacknowledged problem is culture. “Unfortunately, in many organizations, the predominant attitude towards governance and risk management is that [they are] a burden of bureaucracy that slows innovation,” says Willis. “Data governance teams too frequently perpetuate that mindset, over-rotating on data controls and processes where the effort to execute is misaligned to the value they release.” One way to begin improving the effectiveness of data governance is to reassess the organization’s objectives and approach.


What Is Next-Generation Data Protection and Why Should Enterprise Tech Buyers Care?

Next-generation data protection was created to combat today’s most sophisticated and dangerous cyberattacks. It expands the purview of what is protected and how it is protected within an enterprise data infrastructure. This new approach also adds preemptive and predictive capabilities that help mitigate the effects of massive cyberattacks. Moreover, next-generation data protection is the last line of defense against the most vicious, unscrupulous cyber criminals who want nothing more than to take down and harm large companies, either for monetary gain or respect amongst fellow criminals. Therefore, understanding and implementing next-generation data protection is vital. ... To make data protection highly effective today for the datasets that seem most critical, it has to be highly integrated and orchestrated. You don’t want a manual process creating a weak spot for your organization. To resolve this issue, one of the breakthrough capabilities of next-generation data protection is automated cyber protection. Automated cyber protection seamlessly integrates cyber storage resilience into a security operations center (SOC) and data center-wide cyber security applications, such as SIEM and SOAR.


Federal Cyber Operations Would Downgrade Under Shutdown

The pending shutdown could trigger major cutbacks to critical technology services across the federal government, including DHS's Science and Technology Directorate, which provides technical expertise to address emerging threats impacting DHS, first responders and private sector organizations. During a lapse in appropriations, just 31 of its staff members would be retained, representing a staggering 94% reduction in its workforce. The shutdown could lead to longer airport lines and furloughs for hundreds of thousands of federal workers. Brian Fox, CTO of software supply chain management firm Sonatype, previously told Information Security Media Group that CISA plays a critical role in safeguarding government infrastructure during periods of political turbulence. "It's no secret that times of uncertainty, change and disruption are prime opportunities for threat actors to increase efforts to infiltrate systems," Fox said. The shutdown is set to begin at 12:01 a.m. on Saturday, December 21, unless lawmakers can pass a short-term spending bill, after the House rejected a compromise package Thursday night following online remarks from President-elect Donald Trump and his billionaire government efficiency advisor, Elon Musk.


Why cybersecurity is critical to energy modernization

Connected infrastructures for renewables, in many cases, are operated by new companies or even residential users. They don’t have a background in managing reliability and, generally, have very limited or no cybersecurity expertise. Despite this, they all oversee internet-connected systems that are digitally controlled and therefore vulnerable to hacking. The cumulative power controlled by many connected parties also poses a risk of blackouts. The concern is about the suppliers, especially for consumer equipment, as it is not possible to impose security regulations on consumers. The Cyber Resilience Act tries to address suppliers but is likely not sufficient. ... International collaboration is crucial in addressing the cybersecurity risks posed by interconnected energy grids. By sharing knowledge, harmonizing standards, and coordinating joint incident response efforts, countries can collectively enhance their preparedness and resilience. There are various formal international collaborations, such as ENTSO-E and the DSO Entity SEEG, coordination groups like WG8 in NIS, and partnerships between experts and authorities in groups like NCCS. International exercises led by organizations like ENISA and NATO further support these initiatives.


US Ban on TP-Link Routers More About Politics Than Exploitation Risk

While no researcher has called out a specific backdoor or zero-day vulnerability in TP-Link routers, restricting products from a country that is a political and economic rival is not unreasonable, says Thomas Pace, CEO of extended Internet of Things (IoT) security firm NetRise and a former head of cybersecurity for the US Department of Energy. ... Companies and consumers should do their due diligence, keep their devices up to date with the latest security patches, and consider whether the manufacturer of their critical hardware may have secondary motives, says Phosphorus Cybersecurity's Shankar. "The vast majority of successful attacks on IoT are enabled by preventable issues like static, unchanged default passwords, or unpatched firmware, leaving systems exposed," he says. "For business operators and consumer end-users, the key takeaway is clear: adopting basic security hygiene is a critical defense against both opportunistic and sophisticated attacks. Don’t leave the front door open." For companies worried about the origin of their networking devices or the security of their supply chain, finding a trusted third party to manage the devices is a reasonable option. In reality, though, almost every device should be monitored and not trusted, says NetRise's Pace.


The Next Big Thing: How Generative AI Is Reshaping DevOps in the Cloud

One of the biggest impacts of AI on DevOps is in Continuous Integration and Continuous Delivery (CI/CD) pipelines. These pipelines help automate how code changes are managed and deployed to production environments. Automation in this area makes operations more efficient. However, as codebases grow and get more complex, these pipelines often need manual tuning and adjustments to run smoothly. AI impacts this by making pipelines smarter. It can analyze historical data, like build times, test results, and deployment patterns. By doing this, it can adjust how pipelines are set up to minimize bottlenecks and use resources better. For example, AI can decide which tests to run first. It chooses tests that are more likely to find bugs from code changes. This helps to speed up the process of testing and deploying code. ... Security has always been very important for cloud-native apps and DevOps teams. With Generative AI, we can now move from reactive to proactive when it comes to system vulnerabilities. Instead of just waiting for security issues to appear, AI helps DevOps teams spot and prevent potential risks ahead of time. AI-powered security tools can perform data analysis on a company’s cloud system. 
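One way an AI-assisted pipeline might rank tests as described above is to score each test by its historical failure rate and its overlap with the files changed in the commit. The sketch below is a simplified heuristic with invented data and arbitrary weights, meant only to make the idea concrete, not to mirror any specific tool:

```python
# Hypothetical history: per test, past failure counts and the files it exercises.
TEST_HISTORY = {
    "test_checkout": {"failures": 9, "runs": 50, "files": {"cart.py", "payment.py"}},
    "test_login":    {"failures": 1, "runs": 60, "files": {"auth.py"}},
    "test_search":   {"failures": 3, "runs": 40, "files": {"search.py", "cart.py"}},
}

def prioritize(changed_files: set[str]) -> list[str]:
    """Order tests so those most likely to catch a bug in this change run first."""
    def score(name: str) -> float:
        h = TEST_HISTORY[name]
        fail_rate = h["failures"] / h["runs"]                       # historically flaky/fragile?
        overlap = len(h["files"] & changed_files) / max(len(h["files"]), 1)  # relevant to the diff?
        return 0.5 * fail_rate + 0.5 * overlap  # weights are arbitrary for illustration

    return sorted(TEST_HISTORY, key=score, reverse=True)

print(prioritize({"cart.py"}))  # ['test_checkout', 'test_search', 'test_login']
```

Running the highest-scoring tests first does not skip the rest of the suite; it simply surfaces likely failures within the first minutes of the pipeline instead of the last.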


US order is a reminder that cloud platforms aren’t secure out of the box

Affected IT departments are ordered to implement a set of baseline configurations set out by the Secure Cloud Business Applications (SCuBA) project for certain software-as-a-service (SaaS) platforms. So far, the directive notes, the only final configuration baseline is for Microsoft 365. There is also a baseline configuration for Google Workspace listed on the SCuBA website that isn’t mentioned in this week’s directive. However, the order does say that CISA may release additional SCuBA Secure Configuration Baselines for other cloud products in the future. When those baselines are issued, they will also fall under the scope of this week’s directive. ... Coincidentally, the CISA directive comes the same week that CSO reported that Amazon has halted its deployment of M365 for a full year, as Microsoft tries to fix a long list of security problems that Amazon identified. A CISA spokesperson said he couldn’t comment on why the directive was issued this week, but Dubrovsky believes it’s “more of a generic warning” to federal departments, and not linked to an event. Asked how private-sector CISOs should secure cloud platforms, Dubrovsky said they should start with cybersecurity basics. That includes implementing strong identity and access management policies, including MFA, and performing network monitoring and alerting for abnormalities, before going into the cloud.


The value of generosity in leadership

For the first time, we have five generations in the workforce, which means that needs, priorities, and sources of meaning vary. Generosity becomes much more important because you cannot achieve everything by yourself. You can only do that by empowering others and giving them the tools, opportunities, and trust they need to succeed. And then, hopefully, they can together fulfill the organization’s purpose, objectives, and dreams. ... The opposite of a generous leader is a narcissistic leader, who is focused on themselves. Narcissistic leaders are not as effective as leaders who have higher EQs [emotional quotients], who are more generous and recognize that the team’s performance is a result of something beyond themselves. But for one reason or another, narcissistic leaders continue to rise to the top. ... That link between being generous with yourself and being generous with others is so important. When I’ve seen leaders really unlock a new level of leadership, and generosity in leadership, it comes first and foremost from understanding how to lead themselves, and specifically, how to control the amygdala hijack that can send you below the line. Those are very real physiological tendencies that can create what appears to be a zero-sum context based on winning and losing.



Quote for the day:

"Small daily imporevement over time lead to stunning results." -- Robin Sherman