
Daily Tech Digest - January 25, 2026


Quote for the day:

"Life is 10% what happens to me and 90% of how I react to it." -- Charles Swindoll



Agentic AI exposes what we’re doing wrong

What needs to change is the level of precision and adaptability in network controls. You need networking that supports fine-grained segmentation, short-lived connectivity, and policies that can be continuously evaluated rather than set once and forgotten. You also need to treat east-west traffic visibility as a core requirement because agents will generate many internal calls that look legitimate unless you understand intent, identity, and context. ... When the user is an autonomous agent, control relies solely on identity: what the agent is, its permitted actions, what it can impersonate, and what it can delegate. Network location and static IP-based trust weaken when actions are initiated by software that can run anywhere, scale instantly, and change execution paths. This is where many enterprises will stumble.  ... The old finops playbook of tagging, showback, and monthly optimization is not enough on its own. You need near-real-time cost visibility and automated guardrails that stop waste as it happens, because “later” can mean “after the budget is gone.” Put differently, the unit economics of agentic systems must be designed, measured, and controlled like any other production system, ideally more aggressively because the feedback loop is faster. ... The industry’s favorite myth is that architecture slows innovation. In reality, architecture prevents innovation from turning into entropy. Agentic AI accelerates entropy by generating more actions, integrations, permissions, data movement, and operational variability than human-driven systems typically do.
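The identity-only control model described above can be made concrete with a small sketch. This is a hypothetical illustration, not any vendor's product: the names (`AgentIdentity`, `evaluate_policy`) and fields are assumptions, chosen to show a policy that is evaluated on every call and rests on identity, permitted actions, and short-lived credentials rather than network location.

```python
from dataclasses import dataclass, field
from time import time

# Hypothetical sketch of identity-centric, continuously evaluated control for
# an autonomous agent. All names (AgentIdentity, evaluate_policy) are
# illustrative assumptions, not from any product.

@dataclass
class AgentIdentity:
    agent_id: str
    permitted_actions: set
    can_delegate_to: set = field(default_factory=set)
    credential_expiry: float = 0.0   # short-lived connectivity: credentials expire

def evaluate_policy(identity: AgentIdentity, action: str, now: float) -> bool:
    """Evaluated on every call, not once at session setup: checks what the
    agent is, what it may do, and whether its short-lived credential is
    still valid. Network location never enters the decision."""
    if now > identity.credential_expiry:
        return False                      # stale credential: deny, wherever it runs
    return action in identity.permitted_actions

agent = AgentIdentity("invoice-bot", {"read:invoices"}, credential_expiry=time() + 300)
print(evaluate_policy(agent, "read:invoices", time()))    # True while the credential lives
print(evaluate_policy(agent, "delete:invoices", time()))  # False: never permitted
```

Because the check runs per call, revoking a credential or shortening its lifetime takes effect immediately, which is the point of "continuously evaluated rather than set once and forgotten."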


‘Cute’ and ‘Criminal’: AI Perception, Human Bias, and Emotional Intelligence

Can you build artificial intelligence (AI) without emotional intelligence (EI)? Should you? What do we mean when we talk about “humans in the loop”? Are we asking the right questions about how humans design and govern “thinking” machines? One of the immediate problems we face with generative AI is that people increasingly rely on these systems for big decisions. I won’t call all of these ethical decisions, but in some cases they’re consequential decisions. And many users forget that these systems are trained on data that carry all kinds of inherited biases. When we talk about AI bias, it isn’t always abstract. It shows up in very literal assumptions the models make when they are asked to generate images or ideas. ... That question is really the beginning of understanding how these systems work. They are pulling from enormous bodies of unlabeled or inconsistently labeled data and then inferring patterns. We often forget that the inferences are statistical, not conceptual. To the model, “doctor” aligns with “male” because that’s the pattern the dataset reinforced. ... When I didn’t tell the system “diverse audience,” all the children it generated fell into the same narrow “cute child” category. It’s not that the AI systems are racist or sexist. They simply don’t have self-awareness. They’re reflecting the dominant patterns in the datasets they learned from. But reflection without critique becomes reinforcement, and reinforcement becomes norm.


AI is quietly poisoning itself and pushing models toward collapse - but there's a cure

According to analyst firm Gartner, AI data is rapidly becoming a classic Garbage In/Garbage Out (GIGO) problem for users. That's because organizations' AI systems and large language models (LLMs) are flooded with unverified, AI‑generated content that cannot be trusted. ... You know this better as AI slop. While annoying to you and me, it's deadly to AI because it poisons the LLMs with fake data. The result is what's called in AI circles "Model Collapse." AI company Aquant defined this trend: "In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality." ... The analyst argued that enterprises can no longer assume data is human‑generated or trustworthy by default, and must instead authenticate, verify, and track data lineage to protect business and financial outcomes. Ever try to authenticate and verify data from AI? It's not easy. It can be done, but AI literacy isn't a common skill. ... This situation means that flawed inputs can cascade through automated workflows and decision systems, producing worse results. Yes, that's right: if you think AI result bias, hallucinations, and simple factual errors are bad today, wait until tomorrow. ... Gartner suggested many companies will need stronger mechanisms to authenticate data sources, verify quality, tag AI‑generated content, and continuously manage metadata so they know what their systems are actually consuming.
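The "trained on its own outputs" drift can be demonstrated numerically. The following toy simulation is my own illustration (not Gartner's or Aquant's analysis): a "model" that repeatedly re-estimates a Gaussian from a handful of its own generated samples. Each generation trains only on the previous generation's output, and the estimated variance collapses toward zero, a minimal analogue of model collapse.

```python
import random
import statistics

# Toy illustration of model collapse: refit a Gaussian on its own samples.
random.seed(42)
mu, sigma = 0.0, 1.0               # generation 0: fit to "human" data
for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(5)]  # tiny synthetic corpus
    mu = statistics.mean(samples)                          # refit on own output
    sigma = statistics.stdev(samples)

print(sigma < 0.5)   # True: the estimated spread has collapsed far below 1.0
```

Each refit loses a little tail information, and with nothing but synthetic data to correct it, the loss compounds across generations, which is exactly why provenance tagging and data lineage matter.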


4 Realities of AI Governance

AI has not replaced traditional security work; it has layered new obligations on top of it. We still have to protect our data and maintain sovereign assurance through independent audit reports, whether that’s SOC, PCI, ISO, or other standards. Still, today we must also guide our own teams and vendors on the use of powerful AI tools. That’s where accountability begins: with the human or process that touches the data. When the rules are clear, people move faster and safer; when directives are fuzzy, everything downstream is too—so we keep policy short, plain, and visible. ... Unless the contract says otherwise, assume prompts, outputs, or telemetry may be retained for “service improvement.” Fine-print phrases like “continuous improvement” often mean that inputs, outputs, or telemetry can be retained or used to tune systems unless you opt out. To keep reviews consistent, leverage resources like the NIST AI Risk Management Framework. It provides practical checklists for transparency, accountability, and monitoring. Remember the AI supply chain: your vendor depends on model providers, plugins, and open-source components; your risk includes their dependencies, so cover these in your TPRM process. ... Boundaries are the difference between safe speed and reckless speed. Start by defining a short set of data types that must never be pasted into external tools: regulated PII, confidential customer data, unreleased financials, source code, or merger and acquisition materials. Map the rest into simple classes (public, internal, sensitive) and tie each class to approved tools and use cases.
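The classify-and-map step above can be sketched as a simple deny-by-default lookup. The class names, never-paste list, and tool names below are illustrative assumptions; substitute your own taxonomy.

```python
# Minimal sketch of mapping data classes to approved AI tools.
# Class and tool names are illustrative assumptions.
NEVER_PASTE = {"regulated_pii", "customer_confidential", "unreleased_financials",
               "source_code", "mna_materials"}

APPROVED_TOOLS = {
    "public":    {"external_llm", "internal_llm"},
    "internal":  {"internal_llm"},
    "sensitive": set(),                  # no AI tools approved for this class
}

def tool_allowed(data_class: str, tool: str) -> bool:
    """Deny by default: never-paste data and unknown classes get no tools."""
    if data_class in NEVER_PASTE:
        return False
    return tool in APPROVED_TOOLS.get(data_class, set())

print(tool_allowed("public", "external_llm"))       # True: approved pairing
print(tool_allowed("source_code", "external_llm"))  # False: never-paste category
print(tool_allowed("sensitive", "internal_llm"))    # False: no tools approved
```

A table this small is also easy to keep "short, plain, and visible," which is the policy property the author argues for.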


Your Cache is Hiding a Bad Architecture

Most engineers treat caching as a performance optimisation. They see a complex SQL query involving four joins taking 2 seconds to execute. Instead of analysing the execution plan or restructuring the schema, they wrap the call in a redis.get() block. ... By relying on the cache to mask inefficient database interactions, you haven’t fixed the bottleneck; you have simply hidden it behind a volatile memory store. You have turned a “nice-to-have” performance layer into a Critical Infrastructure Dependency. The moment that the cache key expires, or the Redis node evicts the key to free up memory, the application is forced to confront the reality of that 2-second query. And usually, it doesn’t confront it alone. It confronts it with 500 concurrent users who were all waiting for that key. ... Caching is not a strategy; it is a tactic. It is a powerful optimisation for systems that are already healthy, but it is a disastrous life-support system for those that are not. If you take nothing else from this, remember the litmus test: System stability should not depend on volatile memory. Go back to your codebase. Turn off Redis in your staging environment. Run your load tests. If your response times go up, you have a performance problem. If your error rates go up, you have an architectural problem.
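The litmus test can be made concrete with a minimal cache-aside sketch. The four-join query is simulated by a counter, and flipping `CACHE_ENABLED` to `False` plays the role of "turn off Redis in staging"; the names are illustrative, not from any real codebase.

```python
# Minimal cache-aside sketch: the cache hides how often the slow query runs.
CACHE_ENABLED = True
_cache = {}
query_count = {"n": 0}

def slow_query(key: str) -> str:
    query_count["n"] += 1            # stand-in for the 2-second, four-join query
    return f"rows-for-{key}"

def get_report(key: str) -> str:
    if CACHE_ENABLED and key in _cache:
        return _cache[key]           # fast path: the slow query stays hidden
    value = slow_query(key)          # cold path: every expiry/eviction lands here
    if CACHE_ENABLED:
        _cache[key] = value
    return value

get_report("q1"); get_report("q1")
print(query_count["n"])              # 1: the cache absorbed the second request

CACHE_ENABLED = False                # the litmus test: Redis is "off"
get_report("q1"); get_report("q1")
print(query_count["n"])              # 3: every request now pays the full query cost
```

If disabling the cache merely triples your latency, you have a performance problem; if your database falls over under the uncached load, you have the architectural problem the author describes.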


UK bill accelerates shift to offensive cyber security

The Cyber Security and Resilience (Network and Information Systems) Bill entered Parliament in late 2025 and is expected to move through the legislative process during 2026. The government has positioned the bill as a major update to the UK's cyber framework for essential services and digital service providers. ... Poyser argued that many companies still lean heavily on defensive tools without validating how those controls perform under attack conditions. "Cybercriminals and state-backed threat actors are acting faster, more aggressively, and with far greater innovation, especially through the use of artificial intelligence, while too many businesses continue to rely on traditional defensive methods. This widening gap must be closed urgently," said Poyser. He also linked the coming UK legislative changes to a push for more proactive security validation. ... The company said this attacker-style approach changes how risk gets measured and prioritised. It said corporate security teams struggle to maintain an accurate picture of exposure through passive controls and periodic checks. "It is increasingly unrealistic for corporate security teams to maintain an accurate understanding of their true risk exposure using only traditional, passive methods," said Keith Poyser. "Threat actors do not wait for annual audits or one-off checks. Unless organisations test their systems in a way that reflects how real attackers operate, they will continue to be caught off-guard," said Poyser.


The new CDIO stack: Tech, talent and storytelling

The first layer is the one everyone ‘expects’. We built strong platforms: cloud infrastructure that can flex with the business, data platforms that bring together information from plants, systems and markets, analytics and AI capabilities that sit on top of that data, and a solid cyber posture to protect all of it. ... The second layer was not about machines at all. It was about people, about changing the talent mix so that digital is no longer “their” thing — it becomes “our” thing. We realised that if we kept thinking in terms of “IT people” and “business people”, we would always be negotiating across a wall. ... The third layer is the one that surprised even me. We noticed a pattern. Even when we had good platforms and strong talent, some initiatives would start with a bang and fizzle out. The technology worked. The pilot results were good. But momentum died. When we dug deeper, we realised the issue was not in the code. It was in the story. The operators on the shop floor, the sales teams, the plant heads and the board were all hearing slightly different stories about “digital”. ... Yes, I am responsible for technology. If the platforms are not robust, I have failed at the most basic level. Yes, I am responsible for talent. If we don’t have the right mix of skills — product, data, architecture, change — we cannot deliver. But I am also responsible for the narrative. ... For me, the real maturity of a digital organization shows when these three layers are aligned.


What Software Developers Need to Know About Secure Coding and AI Red Flags

The uptick in adoption of AI tools within the developer community aligns with growing expectations. Developers are now expected to work with greater efficiency to meet deadlines more quickly, all while delivering high-quality code. Developers might find AI assistants beneficial because they are immune to human tendencies such as fatigue and bias, which can boost efficiency. But sacrificing safety for speed is unacceptable, as AI tools bring inherent risks of compromise. ... AI tools are not safe for enterprise use unless the code output is reviewed and implemented by a security-proficient human. 30% of security experts admit that they don't trust the accuracy of code generated by AI. That's why security leaders must prioritize the education and upskilling of developer teams to ensure they have the necessary skills and capabilities to mitigate AI-assisted code vulnerabilities as early as possible. This will lead to the cultivation of a "security first" team culture and safer AI use. ... In addition, agentic AI introduces new or "agentic variations" of existing threats, like memory poisoning, remote code execution (RCE) and code attacks. It can harm code via logic errors, which cause the product to "run" correctly but act incorrectly; style inconsistencies, which result in patterns that do not align with the current, required structure; and lenient permissions, which act correctly but lack the authorization context to determine whether an end user is allowed to perform a particular action.
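The "lenient permissions" flaw described above is easy to show in miniature. Both versions below run without error and behave "correctly" in a functional sense, but only the second carries the authorization context; the `User` and invoice names are illustrative assumptions.

```python
# Sketch of a lenient-permissions flaw: correct behavior, absent authorization.
class User:
    def __init__(self, name: str, roles: set):
        self.name, self.roles = name, roles

DB = {"inv-1": {"amount": 100}}

def delete_invoice_lenient(user: User, invoice_id: str) -> bool:
    DB.pop(invoice_id, None)         # "acts correctly" for ANY caller
    return True

def delete_invoice_checked(user: User, invoice_id: str) -> bool:
    if "billing_admin" not in user.roles:
        return False                 # authorization context restored
    DB.pop(invoice_id, None)
    return True

viewer = User("eve", {"viewer"})
print(delete_invoice_checked(viewer, "inv-1"))  # False: denied
print("inv-1" in DB)                            # True: the record survived
```

This is also why such bugs evade naive testing: a functional test of the lenient version passes, and only a review that asks "who is allowed to do this?" catches the gap.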


Building a Self-Healing Data Pipeline That Fixes Its Own Python Errors

The core concept of this is relatively simple. Most data pipelines are fragile because they assume the world is perfect, and when the input data changes even slightly, they fail. Instead of accepting that crash, I designed my script to catch the exception, capture the “crime scene evidence”, which is basically the traceback and the first few lines of the file, and then pass it down to an LLM. ... The primary challenge with using Large Language Models for code generation is their tendency to hallucinate. From my experience, if you ask for a simple parameter, you often receive a paragraph of conversational text in return. To stop that, I leveraged structured outputs via Pydantic and OpenAI’s API. This forces the model to complete a strict form, acting as a filter between the messy AI reasoning and our clean Python code. ... Getting the prompt right took some trial and error. And that’s because initially, I only provided the error message, which forced the model to guess blindly at the problem. I quickly realized that to correctly identify issues like delimiter mismatches, the model needed to actually “see” a sample of the raw data. Now here is the big catch. You cannot actually read the whole file. If you try to pass a 2GB CSV into the prompt, you’ll blow up your context window and apparently your wallet. ... First, remember that every time your pipeline breaks, you are making an API call.
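The catch-evidence-retry loop described above can be sketched in a few lines. The LLM call is stubbed out here (`suggest_fix` is a placeholder, not the article's OpenAI/Pydantic plumbing), so only the "crime scene" capture, the bounded file sample, and the single retry are shown; all names are illustrative.

```python
import os
import tempfile
import traceback

def head_of_file(path: str, n: int = 5) -> str:
    """Only the first few lines -- never the whole file, to spare the
    context window (and the wallet)."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return "".join(next(f, "") for _ in range(n))

def suggest_fix(evidence: dict) -> dict:
    # Placeholder for the structured-output LLM call described in the article.
    return {"delimiter": ";"}

def run_pipeline(path: str, parse):
    try:
        return parse(path, delimiter=",")
    except Exception:
        evidence = {"traceback": traceback.format_exc(),   # the crime scene
                    "sample": head_of_file(path)}          # first lines only
        fix = suggest_fix(evidence)
        return parse(path, delimiter=fix["delimiter"])     # one retry with the fix

# Demo: a semicolon-delimited file that breaks the comma-based parse.
tmp = tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False)
tmp.write("a;b\n1;2\n"); tmp.close()

def parse(path, delimiter):
    with open(path) as f:
        rows = [line.strip().split(delimiter) for line in f]
    if any(len(r) != 2 for r in rows):
        raise ValueError("unexpected column count")
    return rows

result = run_pipeline(tmp.name, parse)
os.unlink(tmp.name)
print(result)   # [['a', 'b'], ['1', '2']]
```

Capping the retry at one attempt also bounds the API-call cost the author warns about: a genuinely broken file fails fast instead of looping.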


‘Complexity is where cyber risk tends to grow’

Last month, the Information Systems Audit and Control Association (ISACA) announced that it had been appointed to lead the global credentialing programme for the US Department of War’s (DoW) Cybersecurity Maturity Model Certification (CMMC). The CMMC, according to ISACA’s chief global strategy officer Chris Dimitriadis, is “designed to protect sensitive information across the defence industrial base and its supply chain”. ... “Transatlantic operations almost always increase complexity, and complexity is where cyber risk tends to grow,” he says. “The first major issue is supply chain exposure. Attackers rarely go after the strongest link, they look for the most vulnerable one. “In global ecosystems, that can be a smaller supplier, a service provider or a subcontractor.” The second issue, he says, is the “nature” of the data and the systems that are involved. “When defence-related information, controlled technical data, or sensitive operational systems are in play, the impact of compromise is simply much higher. That requires stronger access controls, better identity governance, and more disciplined incident response.” The third and final issue that Dimitriadis highlights is “multi-jurisdiction reality”. He explains that companies need to navigate different requirements, obligations and reporting expectations across regions, adding that if governance and security operations aren’t aligned, “you create gaps, and those gaps are exactly what threat actors exploit”.

Daily Tech Digest - March 20, 2025


Quote for the day:

"We get our power from the people we lead, not from our stars and our bars." -- J. Stanford



Agentic AI — What CFOs need to know

Agentic AI takes efficiency to the next level as it builds on existing AI platforms with human-like decision-making, relieving employees of monotonous routine tasks and allowing them to focus on more important work. CFOs will be happy to know that, like other forms of AI, agentic AI is scalable and flexible. For example, organizations can build it into customer-facing applications for a highly customized experience or sophisticated help desk. Or they could embed agentic AI behind the scenes in operations. ... Not surprisingly, like other emerging technologies, agentic AI requires thoughtful and strategic implementation. This means starting with process identification and determining which specific processes or functions are suitable for agentic AI. Business leaders also need to determine organizational value and impact and find ways to evaluate and measure to ensure the technology is delivering clear benefits. Companies should also be mindful of team composition and, if necessary, secure external experts to ensure successful implementation. Beyond technical feasibility, there are other considerations such as data security. For now, CFOs and other business leaders need to wrap their heads around the concept of “agents” and keep their minds open to how this powerful technology can best serve the needs of their organization.


5 pitfalls that can delay cyber incident response and recovery

For tabletop exercises to be truly effective, they must have internal ownership and be customized to the organization. CISOs need to ensure that tabletops are tailored to the company’s specific risks, security use cases and compliance requirements. Exercises should be run regularly (quarterly, at a minimum) and evaluated with a critical eye to ensure that outcomes are reflected in the company’s broader incident response plan. ... One of the most common failures in incident response is a lack of timely information sharing. Key stakeholders, including HR, PR, Legal, executives and board members, must be kept informed about the situation in real time. Without proper communication channels and predefined reporting structures, misinformation or delays can lead to confusion, prolonged downtime and even regulatory penalties for failure to report incidents within required timeframes. CISOs are responsible for proactively establishing clear communication protocols and ensuring that all responders and stakeholders understand their role in incident management. ... Out-of-band communication capabilities are critical for safeguarding response efforts and shielding them from an attacker’s view. Organizations should establish secure, independent channels for coordinating incident response that aren’t tied to corporate networks.


Bringing Security to Digital Product Design

We are aware that prioritizing security is a common challenge. Even though it is a critical issue, most leaders behind the development of new products are not interested in prioritizing this type of matter. Whenever possible, they try to focus the team's efforts on features. For this reason, there is often no room for this type of discussion. So what should we do? Fortunately, there are multiple possible solutions. One way to approach the topic is to take advantage of the opportunity of a collaborative and immersive session such as product discovery. ... Usually, in a product discovery session, there is a proposed activity to map personas. To map this kind of behavior, I recommend using the same persona model that is suggested. From there, go deeper into hostility characteristics in sections such as bio, objectives, interests, and frustrations, as in the figure above. After the personas have been described, it is important to deepen the discussion by mapping journeys. The goal here is to identify actions and behaviors that provide ideas on how to correctly deal with threats. Remember that when using an assailant actor, the materials should be written from its perspective. ... Complementing the user journey with likely attacker actions is another technique that helps software development teams map, plan, and address security as early as possible. 


From Cloud Native to AI Native: Lessons for the Modern CISO to Win the Cybersecurity Arms Race

Today, CISOs stand at another critical crossroads in security operations: the move from a “Traditional SOC” to an “AI Native SOC.” In this new reality, generative AI, machine learning and large-scale data analytics power the majority of the detection, triage and response tasks once handled by human analysts. Like Cloud Native technology before it, AI Native security methods promise profound efficiency gains but also necessitate a fundamental shift in processes, skillsets and organizational culture.  ... For CISOs, transitioning to an AI Native SOC represents a massive opportunity—akin to how CIOs leveraged DevOps and cloud-native to gain a competitive edge:  Strategic Perspective: CISOs must look beyond tool selection to organizational and cultural shifts. By championing AI-driven security, they demonstrate a future-ready mindset—one that’s essential for keeping up with advanced adversaries and board-level expectations around cyber resilience.  Risk Versus Value Equation: Cloud-native adoption taught CIOs that while there are upfront investments and skill gaps, the long-term benefits—speed, agility, scalability—are transformative. In AI Native security, the same holds true: automation reduces response times, advanced analytics detect sophisticated threats and analysts focus on high-value tasks.  


Europe slams the brakes on Apple innovation in the EU

With its latest Digital Markets Act (DMA) action against Apple, the European Commission (EC) proves it is bad for competition, bad for consumers, and bad for business. It also threatens Europeans with a hitherto unseen degree of data insecurity and weaponized exploitation. The information Apple is being forced to make available to competitors with cynical interest in data exfiltration will threaten regional democracy, opening doors to new Cambridge Analytica scandals. This may sound histrionic. And certainly, if you read the EC’s statement detailing its guidance to “facilitate development of innovative products on Apple’s platforms” you’d almost believe it was a positive thing. ... Apple isn’t at all happy. In a statement, it said: “Today’s decisions wrap us in red tape, slowing down Apple’s ability to innovate for users in Europe and forcing us to give away our new features for free to companies who don’t have to play by the same rules. It’s bad for our products and for our European users. We will continue to work with the European Commission to help them understand our concerns on behalf of our users.” There are several other iniquitous measures contained in Europe’s flawed judgement. For example, Apple will be forced to hand over access to innovations to competitors for free from day one, slowing innovation. 


The Impact of Emotional Intelligence on Young Entrepreneurs

The first element of emotional intelligence is self-awareness, which means being able to identify your emotions as they happen and understand how they affect your behavior. During the COVID-19 pandemic, I often felt frustrated when my sales went down during the international book fair. But by practicing self-awareness, I was able to acknowledge the frustration and think about its sources instead of letting it lead to impulsive reactions. Being self-aware helps me stay in control of my actions and make decisions that align with my values. So the solution back then was to keep pushing sales through my online platform instead of showing up in person, as I realized that people were still in lockdown due to the pandemic. Self-regulation is another important aspect of emotional intelligence. While self-awareness is about recognizing emotions, self-regulation focuses on managing how you respond to them. Self-regulation doesn't mean ignoring your emotions but learning to express them in a constructive way. Imagine a situation where you feel angry after receiving negative feedback. Instead of reacting defensively or shouting, self-regulation allows you to take a step back, consider the feedback calmly, and respond appropriately.


Bridging the Gap: Integrating All Enterprise Data for a Smarter Future

To bridge the gap between mainframe and hybrid cloud environments, businesses need a modern, flexible, technology-driven strategy — one that ensures they can access, analyze, and act on their data without disruption. Rather than relying on costly, high-risk "rip-and-replace" modernization efforts, organizations can integrate their core transactional data with modern cloud platforms using automated, secure, and scalable solutions capable of understanding and modernizing mainframe data. One of the most effective methods is real-time data replication and synchronization, which enables mainframe data to be continuously updated in hybrid cloud environments in real time. Low-impact change data capture technology recognizes and replicates only the modified portions of datasets, reducing processing overhead and ensuring real-time consistency across both mainframe and hybrid cloud systems. Another approach is API-based integration, which allows organizations to provide mainframe data as modern, cloud-compatible services. This eliminates the need for batch processing and enables cloud-native applications, AI models, and analytics platforms to access real-time mainframe data on demand. API gateways further enhance security and governance, ensuring only authorized systems can interact with sensitive transactional business data.
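The "replicate only the modified portions" idea behind change data capture can be shown in miniature. This is a toy sketch under strong assumptions: real CDC tools read the mainframe's transaction log rather than diffing snapshots, and the dataset and names here are invented purely to illustrate shipping deltas instead of full copies.

```python
# Toy change-data-capture sketch: ship only the changed rows to the replica.
def capture_changes(old: dict, new: dict) -> dict:
    return {
        "upserts": {k: v for k, v in new.items() if old.get(k) != v},
        "deletes": [k for k in old if k not in new],
    }

def apply_changes(replica: dict, delta: dict) -> None:
    replica.update(delta["upserts"])
    for k in delta["deletes"]:
        replica.pop(k, None)

mainframe_t0 = {"acct-1": 100, "acct-2": 250}
mainframe_t1 = {"acct-1": 100, "acct-2": 300, "acct-3": 50}

cloud_replica = dict(mainframe_t0)
delta = capture_changes(mainframe_t0, mainframe_t1)
apply_changes(cloud_replica, delta)

print(cloud_replica == mainframe_t1)   # True: replica is consistent
print(len(delta["upserts"]))           # 2: only the changed rows were shipped
```

The unchanged `acct-1` row never travels, which is the low-impact property: processing overhead scales with the volume of change, not the size of the dataset.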


How CISOs are approaching staffing diversity with DEI initiatives under pressure

“In the end, a diverse, engaged cybersecurity team isn’t just the right thing to build — it’s critical to staying ahead in a rapidly evolving threat landscape,” he says. “To fellow CISOs, I’d say: Stay the course. The adversary landscape is global, and so our perspective should be as well. A commitment to DEI enhances resilience, fosters innovation, and ultimately strengthens our defenses against threats that know no boundaries.” Nate Lee, founder and CISO at Cloudsec.ai, says that even if DEI isn’t a specific competitive advantage — although he thinks diversity in many shapes is — it’s the right thing to do, and “weaponizing it the way the administration has is shameful.” “People want to work where they’re valued as individuals, not where diversity is reduced to checking boxes, but where leadership genuinely cares about fostering an inclusive environment,” he says. “The current narrative tries to paint efforts to boost people up as misguided and harmful, which to me is a very disingenuous argument.” ... “Diverse workforces make you stronger and you are a fool if you [don’t] establish a diverse workforce in cybersecurity. You are at a distinct disadvantage to your adversaries who do benefit from diverse thinking, creativity, and motivations.”


AI-Powered Cyber Attacks and Data Privacy in The Age of Big Data

Artificial intelligence has significantly increased attackers' capacity to conduct cyber-attacks efficiently, raising both the sophistication and the scale of those attacks. Unlike traditional cyber-attacks, AI-driven attacks can automatically learn, adapt, and develop strategies with minimal human intervention. They exploit machine learning algorithms, natural language processing, and deep learning models to identify and analyze vulnerabilities, evade security and detection systems, and craft believable phishing campaigns. ... AI has also made malware and autonomous hacking systems significantly more intelligent. These systems have gained the ability to infiltrate networks, exploit system vulnerabilities, and evade detection. Unlike conventional malware, AI-driven malware can modify its own code in real time, making detection and eradication far harder for security software. Polymorphic malware is one example: it can change its appearance based on data collected from every attack attempt.


Platform Engineers Must Have Strong Opinions

Many platform engineering teams build internal developer platforms, which allow development teams to deploy their infrastructure with just a few clicks and reduce the number of issues that slow deployments. Because they are designing the underlying application infrastructure across the organization, the platform engineering team must have a strong understanding of their organization and the application types their developers are creating. This is also an ideal point to inject standards about security, data management, observability and other structures that make it easier to manage and deploy large code bases.  ... To build a successful platform engineering strategy, a platform engineering team must have well-defined opinions about platform deployments. Like pizza chefs building curated pizza lists based on expertise and years of pizza experience, the platform engineering team applies its years of industry experience in deploying software to define software deployments inside the organization. The platform engineering team’s experience and opinions guide and shape the underlying infrastructure of internal platforms. They put guardrails into deployment standards to ensure that the provided development capabilities meet the needs of engineering organizations and fulfill the larger organization’s security, observability and maintainability needs.

Daily Tech Digest - March 19, 2025


Quote for the day:

“The only true wisdom is knowing that you know nothing.” -- Socrates


How AI is Becoming More Human-Like With Emotional Intelligence

The concept of humanizing AI is designing systems that can understand, interpret, and respond to human emotions in a way that feels more natural. It is making the AI efficient enough to pick up cues, read the room, and react as a human would, but in a polished way. ... It is only natural that a potential user will prefer to interact with someone who acknowledges their queries and engages with them like a human. AI that sounds and responds like a human helps build trust and rapport with users. ... AI should adapt based on mood and tone. You cannot keep sending automated messages to your users, especially to the ones who are irate. ... The humanization of AI makes AI accessible and inclusive to all. Voice assistants and screen readers, AI-powered speech-to-text, and text-to-speech tools are some great examples. ... As AI becomes more aware and powerful, there are rising concerns about its ethical usage. There have to be checks in place to ensure AI doesn't blatantly mimic human emotions to exploit users' feelings. There should be a trigger warning so that users know they are dealing with machine-generated content. Businesses must ensure ethical AI development, prioritizing user trust and transparency; systems should be programmed to respect user privacy and not manipulate users into making purchases or conversions.


Beyond Trends: A Practical Guide to Choosing the Right Message Broker

In distributed systems, messaging patterns define how services communicate and process information. Each pattern comes with unique requirements, such as ordering, scalability, error handling, or parallelism, which guide the selection of an appropriate message broker. ... The Event-Carried State Transfer (ECST) pattern is a design approach used in distributed systems to enable data replication and decentralized processing. In this pattern, events act as the primary mechanism for transferring state changes between services or systems. Each event includes all the necessary information (state) required for other components to update their local state without relying on synchronous calls to the originating service. By decoupling services and reducing the need for real-time communication, ECST enhances system resilience, allowing components to operate independently even when parts of the system are temporarily unavailable. ... The Event Notification Pattern enables services to notify other services of significant events occurring within a system. Notifications are lightweight and typically include just enough information (e.g., an identifier) to describe the event. To process a notification, consumers often need to fetch additional details from the source (and/or other services) by making API calls. 
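The contrast between the two patterns above comes down to payload shape. A hedged sketch, with invented event names and fields: the notification carries only an identifier, forcing a callback to the source, while the ECST event carries the full state so the consumer can update its local view with no synchronous call.

```python
from dataclasses import dataclass

@dataclass
class OrderShippedNotification:
    """Event Notification: just enough to identify the event;
    consumers call back to the source service for details."""
    order_id: str

@dataclass
class OrderShippedECST:
    """Event-Carried State Transfer: the full state travels with
    the event, so consumers update locally, no synchronous call."""
    order_id: str
    customer_id: str
    items: tuple
    shipped_to: str

local_view = {}

def on_ecst(event: OrderShippedECST) -> None:
    # No API call back to the order service is needed.
    local_view[event.order_id] = {"customer": event.customer_id,
                                  "status": "shipped"}

on_ecst(OrderShippedECST("o-9", "c-1", (("sku-1", 2),), "Berlin"))
print(local_view["o-9"]["status"])   # shipped
```

The trade-off follows directly: ECST buys resilience (the consumer keeps working when the source is down) at the cost of fatter events and eventual consistency, while notifications stay lightweight but re-couple consumers to the source's availability at read time.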


Successful AI adoption comes down to one thing: Smarter, right-size compute

A common perception in the enterprise is that AI solutions require a massive investment right out of the gate, across the board, on hardware, software and services. That has proven to be one of the most common barriers to adoption — and an easy one to overcome, Balasubramanian says. The AI journey kicks off with a look at existing tech and upgrades to the data center; from there, an organization can start scaling for the future by choosing technology that can be right-sized for today’s problems and tomorrow’s goals. “Rather than spending everything on one specific type of product or solution, you can now right-size the fit and solution for the organizations you have,” Balasubramanian says. “AMD is unique in that we have a broad set of solutions to meet bespoke requirements. We have solutions from cloud to data center, edge solutions, client and network solutions and more. ... While both hardware and software are crucial for tackling today’s AI challenges, open-source software will drive true innovation. “We believe there’s no one company in this world that has the answers for every problem,” Balasubramanian says. “The best way to solve the world’s problems with AI is to have a united front, and to have a united front means having an open software stack that everyone can collaborate on. ...”


CDOs: Your AI is smart, but your ESG is dumb. Here’s how to fix it

Embedding sustainability into a data strategy requires a deliberate shift in how organizations manage, govern and leverage their data assets. CDOs must ensure that sustainability considerations are integrated into every phase of data decision-making rather than treating ESG as an afterthought or compliance requirement. A well-designed strategy can help organizations balance business growth with environmental, social and governance (ESG) responsibility while improving operational efficiency. ... Advanced analytics and AI can unlock new opportunities for sustainability. Predictive modeling can help companies optimize energy consumption, while AI-driven insights can identify supply chain inefficiencies that lead to excessive waste. For example, retailers are leveraging AI-powered demand forecasting to reduce overproduction and excess inventory, significantly cutting down carbon emissions and waste.  ... Creating a sustainability-focused data culture requires education and engagement across all levels of the organization. CDOs can implement ESG-focused data literacy programs to ensure that business leaders, data scientists and engineers understand the impact of their work on sustainability. Encouraging collaboration between data teams and sustainability departments ensures ESG considerations remain a priority throughout the data lifecycle.


Five Critical Shifts for Cloud Native at a Crossroads

General-purpose operating systems can become a Kubernetes bottleneck at scale. Traditional OS environments are designed for a wide range of use cases, carry unnecessary overhead and bring security risks when running cloud native workloads. Enterprises are increasingly turning instead to specialized operating systems that are purpose-built for Kubernetes environments, finding that this shift has advantages across security, reliability and operational efficiency. The security implications are particularly compelling. While traditional operating systems leave many potential entry points exposed, specialized cloud native operating systems take a radically different approach. ... Cost-conscious organizations (Is there another kind?) are discovering that running Kubernetes workloads solely in public clouds isn’t always the best approach. Momentum has continued to grow toward pursuing hybrid and on-premises strategies for greater control over both costs and capabilities. This shift isn’t just about cost savings; it’s about building infrastructure precisely tailored to specific workload requirements, whether that’s ultra-low latency for real-time applications or specialized configurations for AI/machine learning workloads.


Moving beyond checkbox security for true resilience

A threat-informed and risk-based approach is paramount in an era of perpetually constrained cybersecurity budgets. Begin by assessing the organization’s crown jewels – sensitive customer data, intellectual property, financial records, or essential infrastructure. These assets represent the core of the organization’s value and should demand the highest priority in protection. ... Organizations frequently underestimate the risks from unmanaged devices, also called shadow IT, and within their software supply chain. As reliance on third-party software and libraries embedded within the organization and in-house apps deepens, the attack surface becomes a constantly shifting landscape with hidden vulnerabilities. Unmanaged devices and unauthorized applications are equally problematic and can introduce unexpected and substantial risks. To address these blind spots, organizations must implement rigorous vendor risk management programs, track IT assets, and enforce application control policies. These often-overlooked elements create critical blind spots, allowing attackers to exploit vulnerabilities that existing security measures might miss. ... Regardless of the trends, CISOs should assess the specific threats relative to their organization and ensure that foundational security measures are in place.


How to simplify app migration with generative AI tools

Reviewing existing documentation and interviewing subject matter experts is often the best starting point to prepare for an application migration. Understanding the existing system’s business purposes, workflows, and data requirements is essential when seeking opportunities for improvement. This outside-in review helps teams develop a checklist of which requirements are essential to the migration, where changes are needed, and where unknowns require further discovery. Furthermore, development teams should expect and plan a change management program to support end users during the migration. ... Technologists will also want to do an inside-out analysis, including performing a code review, diagramming the runtime infrastructure, conducting data discovery, and analyzing log files or other observability artifacts. Even more important may be capturing the dependencies, including dependent APIs, third-party data sources, and data pipelines. This architectural review can be time-consuming and often requires significant technical expertise. Using genAI can simplify and accelerate the process. “GenAI is impacting app migrations in several ways, including helping developers and architects answer questions quickly regarding architectural and deployment options for apps targeted for migration,” says Rob Skillington, CTO & co-founder of Chronosphere.
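One small slice of that inside-out analysis can be automated even before genAI enters the picture. As an illustration (Python-specific, and static analysis only; it will not surface the runtime, API, or data-pipeline dependencies the excerpt also calls out), the standard `ast` module can enumerate a module's imports as a starting dependency inventory:

```python
# Minimal sketch: enumerate a Python module's imports as a first pass at
# the dependency inventory described above.
import ast

def list_imports(source: str) -> set[str]:
    """Return top-level package names imported by the given source code."""
    tree = ast.parse(source)
    deps = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module.split(".")[0])
    return deps
```

Run across a codebase, the resulting inventory is exactly the kind of structured context a genAI assistant can then summarize or question during migration planning.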


How to Stop Expired Secrets from Disrupting Your Operations

Unlike human users, the credentials used by NHIs often don’t receive expiration reminders or password reset prompts. When a credential quietly reaches the end of its validity period, the impact can be immediate and severe: application failures, broken automation workflows, service downtime, and urgent security escalations. And unlike the food in your fridge, there’s no nosy relative to point out that your secrets have gone bad. ... While TLS/SSL certificate expiration often gets the most attention due to its visible impact on websites, many types of machine credentials have built-in expiration. API keys silently time out in backend services, OAuth tokens reach their limits, IAM role sessions terminate, Kubernetes service account tokens expire, and database connection credentials become invalid. ... The primary consequence of an expired credential is a failed authentication attempt. At first glance, this might seem like a simple fix – just replace the credential and restart the service. But in reality, identifying and resolving an expired credential issue is rarely straightforward. Consider a cloud-native application that relies on multiple APIs, internal microservices, and external integrations. If an API key or OAuth token used by a backend service expires, the application might return unexpected errors, time out, or degrade in ways that aren’t immediately obvious. 
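Because NHIs get no reminders, a scheduled sweep over a credential inventory is a common first defense. A minimal sketch, assuming each entry records a name and an `expires_at` timestamp (real systems would pull these from a secrets manager or certificate store rather than a hand-built list):

```python
# Sketch of a proactive expiry check over a credential inventory.
from datetime import datetime, timedelta, timezone

def expiring_soon(inventory: list[dict], days: int = 30) -> list[str]:
    """Return names of credentials that expire within `days` days."""
    cutoff = datetime.now(timezone.utc) + timedelta(days=days)
    return [c["name"] for c in inventory if c["expires_at"] <= cutoff]
```

Wiring the output into alerting (or automated rotation) turns a silent expiry into a routine ticket instead of an outage.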


Role of Interconnects in GenAI

The emergence of High-Performance Computing (HPC) demanded a leap in interconnect capabilities. InfiniBand entered the scene, offering significantly higher throughput and lower latency compared to existing technologies. It became the cornerstone of data centers and large-scale computing environments, enabling the rapid exchange of massive datasets required for complex simulations and scientific computations. Simultaneously, the introduction of Peripheral Component Interconnect Express (PCIe) revolutionized off-chip communication. ... the scalability of GenAI models, particularly large language models, relies heavily on robust interconnects. These systems facilitate the distribution of computational load across multiple processors and machines, enabling the training and deployment of increasingly complex models. This scalability is achieved through efficient network topologies that minimize communication bottlenecks, allowing for both vertical and horizontal scaling. Parallel processing, a cornerstone of GenAI training, is also dependent on effective interconnects. Model and data parallelism require seamless communication and synchronization between processors working on different segments of data or model components. Interconnects ensure that these processors can exchange information efficiently, maintaining consistency and accuracy throughout the training process.
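What the interconnect actually carries during data-parallel training can be shown with a toy all-reduce: each worker's local gradient is averaged so that all workers take the next step with identical parameters. This pure-Python sketch only illustrates the arithmetic; production systems run it over NVLink or InfiniBand with ring or tree communication schedules:

```python
# Toy all-reduce: each "worker" holds a local gradient; synchronization
# averages them so every worker proceeds with identical parameters.

def all_reduce_mean(worker_grads: list[list[float]]) -> list[float]:
    """Average per-coordinate gradients across workers (naive all-reduce)."""
    n = len(worker_grads)
    return [sum(g[i] for g in worker_grads) / n
            for i in range(len(worker_grads[0]))]
```

The volume of data moved per step scales with model size times worker count, which is why interconnect bandwidth and latency bound how far this kind of parallelism can scale.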


That breach cost HOW MUCH? How CISOs can talk effectively about a cyber incident’s toll

Many CISOs struggle to articulate the financial impact of cyber incidents. “The role of a CISO is really interesting and uniquely challenging because they have to have one foot in the technical world and one foot in the executive world,” Amanda Draeger, principal cybersecurity consultant at Liberty Mutual Insurance, tells CSO. “And that is a difficult challenge. Finding people who can balance that is like finding a unicorn.” ... Quantifying the costs of an incident in advance is an inexact art greatly aided by tabletop exercises. “The best way in my mind to flush all of this out is by going through a regular incident response tabletop exercise,” Gary Brickhouse, CISO at GuidePoint Security, tells CSO. “People know their roles so that when it does happen, you’re prepared.” It also helps to develop an incident response (IR) plan and practice it frequently. “I highly recommend having an incident response plan that exists on paper,” Draeger says. “I mean literal paper so that when your entire network explodes, you still have a list of phone numbers and contacts and something to get you started.” Not only does the incident response plan lead to better cost estimates, but it will also lead to a quicker return of network functions. “Practice, practice, practice,” Draeger says. 

Daily Tech Digest - December 16, 2024

What IT hiring looks like heading into 2025

AI isn’t replacing jobs so much as it is reshaping the nature of work, said Elizabeth Lascaze, a principal in Deloitte Consulting’s Human Capital practice. She, too, sees evidence that entry-level roles focused on tasks like note-taking or basic data analysis are declining as organizations seek more experienced workers for junior positions. “Today’s emerging roles require workers to quickly leverage data, generate insights, and solve problems,” she said, adding that those skilled in using AI, such as cybersecurity analysts applying AI for threat detection, will be highly sought after. Although the adoption of AI has led to some “growing pains,” many workers are actually excited about it, Lascaze said, with most employees believing it will create new jobs and enhance their careers. “Our survey found that just 24% of early career workers and 14% of tenured workers fear their jobs will be replaced by AI,” Lascaze said. “Tenured workers are more likely to lead organizational strategy, so they may prioritize AI’s potential to improve efficiency, sophistication, and work quality in existing roles rather than AI’s potential to eliminate certain positions.” “These workers reported being slightly more focused on building AI fluency than early-career employees,” Lascaze said. 


The Future of AI (And Travel) Relies on Synthetic Data

Synthetic data enhances accuracy and fairness in AI models as organic data can be biased or unbalanced, leading to ML models failing to represent diverse populations accurately. With synthetic data, researchers can create datasets that more accurately reflect the demographics they intend to serve, thereby minimizing biases and improving overall model robustness. ... Synthetic data can be a double-edged sword. While it addresses data privacy and availability challenges, it can inadvertently carry or magnify biases embedded in the original dataset. When source data is flawed, those imperfections can cascade into the synthetic version, skewing results — a critical concern in high-stakes domains like healthcare and finance, where precision and fairness are paramount. To counteract this, keeping a human in the loop is essential. While there’s a temptation to use synthetic data to fill in every gap for better accuracy and fairness, we understood that running synthetic searches for every flight combination possible globally for our price tracking and predictions feature could overwhelm our booking system and impact real travelers organically searching for flights. Synthetic data has limitations that go beyond bias. 
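The propagation of source flaws is easy to see in even the simplest synthetic-data generator. A sketch that fits a Gaussian to one numeric column and samples replacements; if the source sample is skewed, the synthetic values inherit that skew:

```python
# Minimal synthetic-data sketch: fit a normal distribution to one numeric
# column and sample replacements. Whatever bias the source column carries
# (the "double-edged sword" above) is baked into the fitted parameters.
import random
import statistics

def synthesize(column: list[float], n: int, seed: int = 0) -> list[float]:
    """Sample n synthetic values matching the column's mean and stdev."""
    mu = statistics.mean(column)
    sigma = statistics.stdev(column)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]
```

Real pipelines fit far richer models (and add privacy guarantees), but the dependence on the source distribution, and therefore on its flaws, is the same.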


9 Cloud Service Adoption Trends

Most organizations are building modern cloud computing applications to enable greater scalability while reducing cost and consumption. They’re also more focused on the security and compliance of cloud systems and how providers are validating and ensuring data protection. “Their main focus is really around cost, but a second focus would be whether providers can meet or exceed their current compliance requirements,” says Will Milewski, SVP of cloud infrastructure and operations at content management solution provider Hyland. ... There’s a fundamental shift in cloud adoption patterns, driven largely by the emergence of AI and ML capabilities. Unlike previous cycles focused primarily on infrastructure migration, organizations are now having to balance traditional cloud ROI metrics with strategic technology bets, particularly around AI services. According to Kyle Campos, chief technology and product officer at cloud management platform provider CloudBolt Software, this evolution is being catalyzed by two major forces: First, cloud providers are aggressively pushing AI capabilities as key differentiators rather than competing on cost or basic services. Second, organizations are realizing that cloud strategy decisions today have more profound implications for future innovation capabilities than ever before.


We’ve come a long way from RPA: How AI agents are revolutionizing automation

As the AI ecosystem evolves, a significant shift is occurring toward vertical AI agents — highly specialized AI systems designed for specific industries or use cases. As Microsoft founder Bill Gates said in a recent blog post: “Agents are smarter. They’re proactive — capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior.” Unlike traditional software-as-a-service (SaaS) models, vertical AI agents do more than optimize existing workflows; they reimagine them entirely, bringing new possibilities to life. ... The most profound shift in the automation landscape is the transition from RPA to multi-agent AI systems capable of autonomous decision-making and collaboration. According to a recent Gartner survey, this shift will enable 15% of day-to-day work decisions to be made autonomously by 2028. These agents are evolving from simple tools into true collaborators, transforming enterprise workflows and systems. ... As AI agents progress from handling tasks to managing workflows and entire jobs, they face a compounding accuracy challenge. Each additional step introduces potential errors, multiplying and degrading overall performance. 
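The compounding-accuracy challenge is just multiplication: if each step succeeds independently with probability p, the whole chain succeeds with p raised to the number of steps.

```python
# Worked example of the compounding-accuracy problem described above:
# per-step reliability p over n independent steps gives p**n end to end.

def chain_success(p: float, steps: int) -> float:
    """End-to-end success probability for `steps` independent steps."""
    return p ** steps
```

Even 99% per-step reliability leaves only roughly 60% end-to-end success over 50 steps, which is why agent frameworks add checkpoints and verification between steps rather than trusting a long unattended chain.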


8 reasons why digital transformations still fail

“People got really excited about, ‘We’re going to transform,’” Woerner says, but she believes part of the problem lies with leaders who “didn’t have the discipline to make the hard choices early on” to get employee buy-in. Ranjit Varughse, CIO of automotive paint and equipment firm Wesco Group, agrees. “The first challenge is getting digital transformation buy-in from teams at the outset. People are creatures of habit, making many hesitant to change their existing systems and processes,” he says. “Without a clear change management strategy to get a team aligned, ERP implementations in particular can be slow, stall, or even fail entirely.” ... Digital transformation isn’t a technology problem, it’s about understanding how people actually work, not how we think they should work, Wei says. “At PropertySensor, we scrapped our first version after realizing real estate agents needed mobile-first solutions, not desktop dashboards,” he says. ... “People, process, and technology” is a common phrase technology leaders use when discussing the critical elements of a transformation. “But the real focus should be people, people, people,” echoes Megan Williams, vice president of global technology strategy and transformation at TransUnion.


How companies can address bias and privacy challenges in AI models

Companies understand that AI adoption is existential to their survival, with the winners of tomorrow being determined by their ability to harness AI effectively. Furthermore, they understand that their brand’s reputation is one of their most valuable assets. Missteps with AI—especially in mission-critical contexts (think of a trading algorithm going AWOL, a breach of user privacy, or a failure to meet safety standards)—can erode public trust and harm a company’s bottom line. This could have dire consequences. With a company’s competitiveness and potentially its very survival at stake, AI governance becomes a business imperative that they cannot afford to ignore. ... Certainly, we see a lot of activity from the government – both at the state and federal levels – which is creating a fragmented approach. We also see leading companies who understand that adopting AI is crucial to their future and want to move fast. They are not waiting for the regulatory environment to settle and are taking a leadership position in adopting responsible AI principles to safeguard their brand reputations. So, I believe companies will act intelligently out of self-interest to accelerate their AI initiatives and increase business returns. 


Ensuring AI Accountability Through Product Liability: The EU Approach and Why American Businesses Should Care

In terms of a substantive law regulating AI (which can be the basis of the causality presumption under the proposed AI Liability Directive), the European Union’s Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, becoming the first comprehensive legal framework for AI globally. The AI Act applies to providers and developers of AI systems that are marketed or used within the EU (including free-to-use AI technology), regardless of whether those providers or developers are established in the EU or a separate country. The EU AI Act sets forth requirements and obligations for developers and deployers of AI systems in accordance with risk-based classification system and a tiered approach to governance, which are two of the most innovative features of the AI Act. The Act classifies AI applications into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. AI systems deemed to pose an unacceptable risk, such as those that violate fundamental rights, are outright banned. ... High-risk AI systems, which include areas such as health care, law enforcement, and critical infrastructure, will face stricter regulatory scrutiny and must comply with rigorous transparency, data governance, and safety protocols. 


Agentic AI is evolving into specialised assistants, enabling the workforce to focus on value-adding tasks

A structured discovery approach is required to identify high impact areas for AI adoption rather than siloed use-cases. Infosys Topaz comprises verticalised blueprints, industry catalogues and strategic AI value map analysis capabilities. We have created playbooks for industries that lay out a structured roadmap to embed and mature GenAI into core processes and operations and across the IT landscape. This includes the right use-cases across the value stream spanning operations, customer experience, research and development, etc. As part of our Responsible AI by Design approach, we implement robust technical and process guardrails to ensure privacy and security. These include impact assessments, audits, automated policy enforcement, monitoring tools, and runtime safeguards to filter inputs and outputs for generative AI. We also use red-teaming and advanced testing tools to identify vulnerabilities and fortify AI models. Additionally, we employ privacy-preserving techniques such as Homomorphic Encryption and Secure Multi-Party Computation to enhance the security and resilience of our AI solutions. ... AI-driven monitoring tools detect inefficiencies in IT infrastructure, leveraging predictive analytics and forecasting techniques to improve utilisation in real time.
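Secure Multi-Party Computation can be illustrated with its simplest building block, additive secret sharing. This is a toy sketch for intuition, not the production technique referenced above: each party holds a random-looking share, and only the combined shares reveal the value.

```python
# Toy additive secret sharing, a building block behind the secure
# multi-party computation mentioned in the text. Each party sees only a
# random share; the secret is recovered only when all shares combine.
import random

MOD = 2**61 - 1  # arithmetic modulo a large (Mersenne) prime

def share(secret: int, parties: int, seed: int = 0) -> list[int]:
    """Split a secret into `parties` additive shares modulo MOD."""
    rng = random.Random(seed)
    shares = [rng.randrange(MOD) for _ in range(parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the secret."""
    return sum(shares) % MOD
```

Because shares add homomorphically, parties can also compute the sum of two shared values without any party ever seeing either input, the core trick that lets joint computation proceed over private data.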


Security leaders top 10 takeaways for 2024

One of the most significant new rules, which has received the lion’s share of press attention, is the ‘materiality’ component, or the need to report “material” cybersecurity incidents to the SEC within four business days. At issue is whether the incident led to significant risk to the organization and its shareholders. If so, it’s defined as material and must be reported within four days of this determination being made (not its initial discovery). “Materiality extends beyond quantitative losses, such as direct financial impacts, to include qualitative aspects, like reputational damage and operational disruptions,” he says. McGladrey says the SEC’s materiality guidance underscores the importance of investor protection in relation to cybersecurity events and, if in doubt, the safest path is reporting. “If a disclosure is uncertain, erring on the side of transparency safeguards shareholders,” he tells CSO. ... As a virtual or fractional CISO service, Sage has observed startups engaging vCISO services earlier, in pre-seed and Series A stage and, in some cases, before they’ve finalized their minimum viable product. “Small technology consulting and boutique software development groups are looking for ISO 27001 certifications to ensure they can continue serving their larger customers,” she tells CSO.


Emotional intelligence in IT management: Impact, challenges, and cultural differences

While delivering results is the primary goal of any leader, you can’t forget that you’re managing people, not machines. Emotional intelligence helps balance the need for productivity with fairness and empathy. One way to illustrate this balance is through handling difficult conversations about career moves. Managing a team of over 100 support specialists for several years gave me the opportunity to conduct an interesting experiment. Many employees tend to hide the fact that they are exploring job opportunities elsewhere until the last minute. This creates unnecessary tension and can lead to higher turnover. However, if a manager removes the stigma around job interviews and treats them as part of market research, it encourages open communication. ... Emotionally intelligent managers possess the ability to identify the core of a conflict without letting it escalate. Attempting to gather every single piece of information is not always helpful. Instead, managers should focus on resolving conflicts, as often the solution is already within the team. This does not mean conducting surveys or asking for feedback from each person, as delicate situations require a more refined approach. A manager should observe, analyze, and extract the most significant points quickly and intuitively, enabling conflict resolution before it grows into a larger issue.



Quote for the day:

“Things come to those who wait, but only the things left by those who hustle” -- Abraham Lincoln

Daily Tech Digest - August 31, 2024

CTO to CTPO: Navigating the Dual Role in Tech Leadership

A competent CTPO can streamline processes, reduce the risk of misalignment, and offer a clear vision for both product and technology initiatives. This approach can also be cost-effective, as executive roles come with high salaries and significant demands. Combining these roles simplifies the organizational structure, providing a single point of contact for research and development. This works well in environments where product and technology are closely integrated and mature in the product and technology systems. In my role, most of my day-to-day activities are focused on the product. I’m very conscious that I don’t have a counterpart to challenge my thinking, so I spend a lot of time with senior business stakeholders to ensure the debates and discussions occur. I also encourage this in my leadership team to ensure that technology and product leaders are rigorous in their thinking and decision-making. Ultimately, deciding to have one or two roles for product and technology depends on a company’s specific needs, maturity, and strategic priorities. For some, clarity and focus come from having both a CPO and a CTO. For others, the simplicity and unified vision that comes from a single leader makes more sense.


How quantum computing could revolutionise (and disrupt) our digital world

Everything that is encrypted today could potentially be laid bare. Banking, commerce, and personal communications—all the pillars of our digital world—could be exposed, leading to consequences we’ve never encountered. Thankfully, Q-Day is estimated to be five to ten years away, mainly because building a stable quantum computer is fiendishly difficult. The processors need to be cooled to near absolute zero, among other technical challenges. But make no mistake—it’s coming. Sergio stressed that businesses and countries need to prepare now. Already, some groups are harvesting encrypted data with the intention of decrypting it when quantum computing capabilities mature. Much like the Y2K bug, Q-Day requires extensive preparation. This August, the National Institute of Standards and Technology (NIST) released the first set of post-quantum encryption standards designed to withstand quantum attacks. Similarly, the UK’s National Cyber Security Centre (NCSC) advises that migrating to post-quantum cryptography (PQC) is a complex, multi-year effort that requires immediate action.


Transparency is often lacking in datasets used to train large language models

Researchers often use a technique called fine-tuning to improve the capabilities of a large language model that will be deployed for a specific task, like question-answering. For fine-tuning, they carefully build curated datasets designed to boost a model’s performance for this one task. The MIT researchers focused on these fine-tuning datasets, which are often developed by researchers, academic organizations, or companies and licensed for specific uses. When crowdsourced platforms aggregate such datasets into larger collections for practitioners to use for fine-tuning, some of that original license information is often left behind. “These licenses ought to matter, and they should be enforceable,” Mahari says. For instance, if the licensing terms of a dataset are wrong or missing, someone could spend a great deal of money and time developing a model they might be forced to take down later because some training data contained private information. “People can end up training models where they don’t even understand the capabilities, concerns, or risk of those models, which ultimately stem from the data,” Longpre adds.
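The audit this argues for can start as something very small: flagging datasets in an aggregated collection whose license field has been dropped along the way. The metadata shape here (a name plus a license string) is an assumption for the example, not any platform's actual schema:

```python
# Sketch of a basic provenance audit: flag datasets in an aggregated
# collection whose license metadata is missing or unusable.

def missing_licenses(collection: list[dict]) -> list[str]:
    """Return names of datasets with no usable license metadata."""
    return [d["name"] for d in collection
            if not d.get("license") or d["license"].lower() == "unknown"]
```

Running such a check before fine-tuning is a cheap way to surface exactly the risk the researchers describe: training on data whose terms you cannot verify.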


Cyber Insurance: A Few Security Technologies, a Big Difference in Premiums

Finding the right security technologies for the business is increasingly important, because ransomware incidents have accelerated over the past few years, says Jason Rebholz, CISO at Corvus Insurance, a cyber insurer. Attackers posted the names of at least 1,248 victims to leak sites in the second quarter of 2024, the highest quarterly volume to date, according to the firm. ... "We take VPNs very seriously in how we price [our policies] and what recommendations we give to our companies ... and this is mostly related to ransomware," Itskovich says. For those reasons, businesses should take a look at their VPN security and email security, if they want to better secure their environments and, by extension, reduce their policy costs. Because an attacker will eventually find a way to compromise most companies, having a way to detect and respond to threats is vitally important, making managed detection and response (MDR) another technology that will eventually pay for itself, he says. ... For smaller companies, email security, cybersecurity-awareness training, and multi-factor authentication are critical, says Matthieu Chan Tsin, vice president of cybersecurity services for Cowbell. 


Cybersecurity for Lawyers: Open-Source Software Supply Chain Attacks

A supply chain attack co-opts the trust in the open-source development model to place malicious code inside the victim’s network or computer systems. Essentially, the attacker inserts malicious code, like a foodborne virus, into the software during its development process, positioning the malicious code to be unintentionally installed by the end user installing the software within their network. Any organization using the affected project has unwittingly invited the malicious code within its walls. Malicious code may already reside within a newly adopted OSS project, or it could be delivered via an updated version of a trusted project. The difference between an OSS supply chain attack and a traditional supply chain attack (e.g., inserting malware into proprietary software) is that the organization using OSS has access to its entire code at the outset and throughout its use (and can therefore examine it for vulnerabilities or otherwise have greater insight into how it functions when used maliciously). While some organizations may have the resources and wherewithal to leverage this as a security advantage, many will not.
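For organizations without the resources to review an entire codebase, one baseline defense is verifying each downloaded artifact against a digest pinned at review time, so a tampered release fails closed. A minimal `hashlib` sketch (the artifact bytes here are placeholders to keep the example self-contained):

```python
# One basic supply-chain mitigation: verify a downloaded artifact against
# a hash pinned when the release was reviewed, so a swapped or modified
# package is rejected before installation.
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact matches the reviewed digest."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256
```

Package managers expose the same idea natively (e.g., hash-checking install modes and lockfiles), which is usually preferable to hand-rolled checks.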


A Measure of Motive: How Attackers Weaponize Digital Analytics Tools

IP geolocation utilities can be used legitimately by advertisers and marketers to gauge the geo-dispersed impact of advertising reach and the effectiveness of marketing funnels (albeit with varying levels of granularity and data availability). However, Mandiant has observed IP geolocation utilities used by attackers. Some real-world attack patterns that Mandiant has observed leveraging IP geolocation utilities include: Malware payloads connecting to geolocation services for infection tracking purposes upon successful host compromise, such as with the Kraken Ransomware. This allows attackers a window into how fast and how far their campaign is spreading. Malware conditionally performing malicious actions based on IP geolocation data. This functionality allows attackers a level of control around their window of vulnerability and ensures they do not engage in “friendly fire” if their motivations are geo-political in nature, such as indiscriminate nation-state targeting by hacktivists. An example of this technique can be seen in the case of the TURKEYDROP variant of the Adwind malware, which attempts to surgically target systems located in Turkey.


AI development and agile don't mix well

Interestingly, several AI specialists see formal agile software development practices as a roadblock to successful AI. ... "While the agile software movement never intended to develop rigid processes -- one of its primary tenets is that individuals and interactions are much more important than processes and tools -- many organizations require their engineering teams to universally follow the same agile processes." ... The report suggested: "Stakeholders don't like it when you say, 'it's taking longer than expected; I'll get back to you in two weeks.' They are curious. Open communication builds trust between the business stakeholders and the technical team and increases the likelihood that the project will ultimately be successful." Therefore, AI developers must ensure technical staff understand the project purpose and domain context: "Misunderstandings and miscommunications about the intent and purpose of the project are the most common reasons for AI project failure. Ensuring effective interactions between the technologists and the business experts can be the difference between success and failure for an AI project."


A quantum neural network can see optical illusions like humans do. Could it be the future of AI?

When we see an optical illusion with two possible interpretations (like the ambiguous cube or the vase and faces), researchers believe we temporarily hold both interpretations at the same time, until our brains decide which picture should be seen. This situation resembles the quantum-mechanical thought experiment of Schrödinger’s cat. This famous scenario describes a cat in a box whose life depends on the decay of a quantum particle. According to quantum mechanics, the particle can be in two different states at the same time until we observe it – and so the cat can likewise simultaneously be alive and dead. I trained my quantum-tunnelling neural network to recognise the Necker cube and Rubin’s vase illusions. When faced with the illusion as an input, it produced an output of one or the other of the two interpretations. Over time, which interpretation it chose oscillated back and forth. Traditional neural networks also produce this behaviour, but in addition my network produced some ambiguous results hovering between the two certain outputs – much like our own brains can hold both interpretations together before settling on one.
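The bistable behaviour described -- two stable interpretations, with occasional ambiguous states in between as the system flips -- can be illustrated with a purely classical toy model. This is not the author's quantum-tunnelling network, just a slowly driven double-well system chosen as a sketch: the wells near +1 and -1 stand for the two interpretations, and the drive makes the dominant one alternate.

```python
import math

def simulate(steps=6000, dt=0.05, amp=1.0, omega=0.05, x0=1.0):
    """Driven double-well dynamics: dx/dt = x - x**3 + amp*sin(omega*t).
    The drive periodically tilts the landscape enough to destroy one well,
    forcing the state to flip to the other interpretation."""
    xs, x = [], x0
    for t in range(steps):
        drive = amp * math.sin(omega * t * dt)
        x += dt * (x - x**3 + drive)
        xs.append(x)
    return xs

xs = simulate()
assert max(xs) > 0.5 and min(xs) < -0.5   # both interpretations occur over time
assert any(abs(v) < 0.3 for v in xs)      # ambiguous in-between states appear
```

The in-between values near zero are the classical analogue of the "hovering" outputs described above, though in this toy model they are merely transient crossings rather than genuine superpositions.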


How To Channel Anger As An Emotional Intelligence Strategy

If you want to use anger in a constructive way, you first have to break the mental stigma that “Anger is bad.” Anger, like all emotions, is an instinctual response. Rather than label this response as good or bad, it’s more useful to think of it simply as data. Your emotions offer you data, and you can harness that data in a number of ways. ... The second half of the battle is to learn to use your anger with intent. To do so, you have to understand the potential for anger to hijack your behavior. “[Anger] can also be a negative,” Scherzer warned in the same interview. “It has been [for me] in the past, where you almost get too much adrenaline, too much emotion, and you aren’t thinking clearly.” In other words, Scherzer doesn’t just dial in anger and then see what happens. He channels it with purpose. Even though he may appear intense or even hotheaded, his intent is strong. And that intent is what enables him to harness his anger in a constructive way. ... Since this is a more advanced emotional intelligence strategy, there are a couple of things you should keep top of mind. First, if you’re the kind of person whose anger frequently gets in your way, you should likely focus your time on management strategies, not this one. Second, you should start by applying this strategy in a lower-stakes situation.


How to Improve Your Leadership Style With Cohort-Based Leadership Training

Cohort-based learning is rooted in Albert Bandura's social learning theory. Social interaction improves learning because humans are social creatures by nature. Hence, we enjoy learning more from interactive, multimedia methods than passive ones that lack feedback or immediate results. Perspective-taking and mentalizing in cohorts promote empathy and communication skills, while emotional resonance and dialogue deepen understanding for all involved. The accountability that forms in groups encourages commitment and performance. Community-based learning, feedback, emotional support and real-world application ignite individual and collective learning. ... The structured curriculum is designed to cover various aspects of leadership, building upon previous sessions to provide a comprehensive learning journey. Practical tools, measurements and models are provided to apply directly to the work environment. Real-time feedback and consulting during group sessions help participants tackle specific workplace challenges, allowing for continuous learning, application and feedback to support their development.



Quote for the day:

“A bend in the road is not the end of the road unless you fail to make the turn.” -- Helen Keller