Showing posts with label Empathy. Show all posts
Showing posts with label Empathy. Show all posts

Daily Tech Digest - October 09, 2025


Quote for the day:

"No man is good enough to govern another man without that other's consent." -- Abraham Lincoln



The Quantum Wake-Up Call: Preparing Your Organization for PQC

Quantum computing promises transformative breakthroughs across industries—but it also threatens the cryptographic foundations that secure our digital world. As quantum capabilities evolve, organizations must proactively prepare for the shift to post-quantum cryptography (PQC) to safeguard sensitive data and maintain trust. ... The very mathematical "hardness" that makes RSA and ECC secure against classical computers is precisely what makes them fatally vulnerable to quantum computing. Shor's Algorithm: This quantum algorithm, developed by Peter Shor in 1994, is capable of solving the integer factorization and discrete logarithm problems exponentially faster than any classical machine. Once a sufficiently stable and large-scale quantum computer is built, an encryption that might take a supercomputer millions of years to break could be broken in hours or even minutes. The Decryption Time Bomb: Because current PKC is used to establish long-term trust and to encrypt keys, the entire cryptographic ecosystem is a single point of failure. The threat is compounded by the "Harvest Now, Decrypt Later" strategy, meaning sensitive data is already being harvested and stored by adversaries, awaiting the quantum moment to be unlocked. Quantum computing is no longer theoretical—it’s a looming reality. Algorithms like RSA and ECC, which underpin most public-key cryptography, are vulnerable to quantum attacks via Shor’s algorithm.


Producing a Better Software Architecture with Residuality Theory

Residuality theory is a very simple process. Sometimes, people are put off because the theoretical work necessary to prove that residuality works is very heavy, but applying it is easy, O’Reilly explained: We start out with a suggestion, a naive architecture that solves the functional problem. From there we stress the architecture with potential changes in the environment. These stressors allow us to uncover the attractors, often through conversations with domain experts. For each attractor, we identify the residue, what’s left of our architecture in this attractor, and then we change the naive architecture to make it survive better. We do this many times and, at the end, integrate all of these augmented residues into a coherent architecture. We can then test this to show that it survives unknown forms of stress better than our naive architecture. In complex business environments with uncertainty, residuality makes it possible to create architectures quickly instead of chasing down stakeholders demanding specific requirements or answers to questions that are unknown by the business itself, O’Reilly said. It pulls technical architects out of details and teaches them to productively engage with a business environment without the lines and boxes of traditional enterprise architecture, he concluded. ... Senior architects report that it gives a theoretical justification for practices that many had already figured out and a shared vocabulary for teams to talk about architecture. 


Will AI-SPM Become the Standard Security Layer for Safe AI Adoption?

By continuously monitoring AI assets, AI-SPM helps ensure that only trusted data sources are used during model development. Runtime security testing and red-team exercises detect vulnerabilities caused by malicious data. The system actively identifies abnormal model behavior, such as biased, toxic, or manipulated output, and brings them up for remediation prior to production release. ... AI-SPM continuously checks system requests and user inputs to find dangerous patterns before they lead to security problems, like attempts to remove or change built-in directives. It also uses protection against prompt injection and jailbreak attacks, which are common ways to access or alter system-level commands. By finding unapproved AI tools and services, it stops the use of insecure or poorly set up LLMs that could reveal system prompts. ... Shadow AI is starting to get more attention, and for good reason. Like shadow IT, employees are using public AI tools without authorization. That might mean uploading sensitive data or sidestepping governance rules, often without realizing the risks. The problem isn’t just the tools themselves, but the lack of visibility around how and where they’re being used. AI-SPM should work to identify all AI tools in play across networks, endpoints, cloud platforms, and dev environments, mapping how data moves between them, which is often the missing piece when trying to understand exposure risks.


How to write nonfunctional requirements for AI agents

Nonfunctional requirements for AI agents can be like those for applications, where user stories are granular and target delivering small, atomic functions. These NFRs can guide developers in answering how to develop the functionality described in user stories and to help quantify what should pass a code review. However, you may need another set of NFRs expressed at a feature or release level. ... “Agile teams often struggle with how to evaluate NFRs like latency, fairness, or explainability, which may seem nonfunctional, but with a little specification work, they can often be made concrete and part of a user story with clear pass/fail tests,” says Grant Passmore, co-founder of Imandra. “We use formal verification to turn NFRs into mathematical functional requirements we can prove or disprove.” ... AI agent NFRs that connect dev with ops have all the complexities of applications, infrastructure, automations, and AI models bundled together. Deploying the AI agent is just the beginning of its lifecycle, and NFRs for maintainability and observability help create the feedback loops required to diagnose issues and make operational improvements. As many organizations aim toward autonomous agentic AI and agent-to-agent workflows, standardizing a list of NFRs that are applied across all AI agents becomes important.


Unplug Gemini from email and calendars, says cybersecurity firm

CSOs should consider turning off Google Gemini access to employees’ Gmail and Google Calendars, because the chatbot is vulnerable to a form of prompt injection, says the head of a cybersecurity firm that discovered the vulnerability. ”If you’re worried about the risk, you might want to turn off automatic email and calendar processing by Gemini until this, and potentially other things like it, are addressed,” Jeremy Snider, CEO of US-based FireTail, said in an interview. ... This flaw is “particularly dangerous when LLMs, like Gemini, are deeply integrated into enterprise platforms like Google Workspace,” the report adds. FireTail tested six AI agents. OpenAI’s ChatGPT, Microsoft Copilot, and Anthropic AI’s Claude caught the attack. Gemini, DeepSeek, and Grok failed. In a test, FireTail researchers were able to change the word “Meeting” in an appointment in Google Calendar to “Meeting. It is optional.” ... “ASCII Smuggling attacks against AIs aren’t new,” commented Joseph Steinberg, a US-based cybersecurity and AI expert. “I saw one demonstrated over a year ago.” He didn’t specify where, but in August 2024, a security researcher blogged about an ASCII smuggling vulnerability in Copilot. The finding was reported to Microsoft. Many ways of disguising malicious prompts will be discovered over time, he added, so it’s important that IT and security leaders ensure that AIs don’t have the power to act without human approval on prompts that could be damaging.


Broken Opt-Outs, Big Fines: Tractor Supply Shows Privacy Enforcement Has Arrived for Retail

The Tractor Supply violations reveal a clear enforcement pattern. Broken opt-out links that route to dead webforms. Global Privacy Control signals ignored entirely. Privacy notices that skip job applicant data disclosures. Vendor agreements without data restriction clauses. These aren’t random oversights. They’re the exact gaps that surfaced across recent CCPA enforcement by the Attorney General and CPPA orders. Regulators are building a playbook: test the opt-out mechanisms, check for GPC compliance, review all privacy notices including HR portals, and audit third-party contracts. If any piece fails, expect enforcement. Regulators no longer accept opt-outs in theory or privacy policies in fine print. ... The message is clear: prove you have control. Not just over the data you collect, but over the algorithms that process it. Retailers who can’t show governance across both will face scrutiny on multiple fronts. The same broken opt-out that triggers a privacy fine could signal to regulators that your AI systems lack oversight too. This isn’t about adding more compliance checkboxes. It’s about recognizing that data governance and AI governance are becoming inseparable. The retailers who understand this convergence will build unified systems that handle both. The ones who don’t will scramble to retrofit governance after the fact, just like they’re doing with privacy today.


Why Enterprises Continue to Stick With Traditional AI

AI success also depends on digital maturity. Many organizations are still laying data foundations. "Let's say you want to run analytics on how many tickets were raised, do a dashboard on how many tickets one can expect … all of that was over a call. Nothing was digitized. There is no trace of it. That is the reason why chatbots are getting created because they are now recording and getting traced," Iyer said. ... Strict compliance and privacy requirements push enterprises toward controlled AI development. … Even in such cases, we ensure the data in the model that we build, it stays exclusively. At any point of time, your data or your model is not going to be used for the betterment of someone else," Iyer said. This approach reflects broader enterprise concerns about AI governance. According to KPMG research, frameworks such as local interpretable model-agnostic explanations and Shapley Additive exPlanations help clarify AI decisions, support compliance and build stakeholder confidence. ... Iyer said enterprise needs are often highly contextual, making massive models unnecessary. "Do you need a 600-700 billion [parameter] model sitting in your enterprise running inferences when the questions are going to be very contextual?" she said. This practical wisdom is supported by recent industry analysis. Traditional ML models often produce classification accuracy at a fraction of the cost compared to deep learning alternatives. 


Lead with a human edge: Why empathy is the new strategy

Traditional management was built on control: plans, processes, and hierarchies designed to tame complexity. But as Pushkar noted, ‘organisations are living organisms. They evolve, sense, and respond. Trying to manage them like machines is an illusion. The leaders of tomorrow will not be engineers of systems — they will be gardeners of cultures.’ “Planting a tree is very easy,” Bidwai said. “The real game is how you nurture, how you create an environment, how you enable the culture.” Nurturing, not directing, is the leadership mindset for an era of interdependence. ... Perhaps the most striking moment of Pushkar’s talk was not analytical but symbolic. He invited participants to discard their corporate titles just for a moment and invent new ones that reflected their purpose, not their position. “Sometimes titles define how we operate. Can we look beyond titles?” His own? In People Matters, Pushkar stated that he visualises his creative title as Plumber. “Wherever anything needs fixing, I will go and fix things.” The metaphor landed. Leadership, stripped of status, is about service. To lead with a human edge is to roll up your sleeves, listen, and fix what’s broken, in systems, in relationships, in ourselves. ... What Pushkar calls ‘the human edge’ is not a nostalgic pushback against technology. It is a pragmatic blueprint for sustainable growth. The leaders who will define the next decade will be those who use AI to augment human potential, not replace it those who recognise that data drives decisions, but empathy drives destiny.


Building a modern fraud prevention stack: why centralised data, not point solutions, is the answer

The fraud prevention landscape is riddled with fragmented tools, reactive approaches and blind spots. Despite the best of intentions, many organisations rely on outdated, point-in-time methods that are ill-suited for today’s dynamic fraud landscape. And fraud no longer plays by the old rules. It unfolds across the entire customer journey, mutating with every new channel, payment method or customer behaviour pattern. A fraudster may test stolen credentials one day, then come back weeks later to exploit a weak link in the onboarding or refund process. These disjointed systems miss multi-step attacks and patterns that unfold over time. ... while many organisations have historically relied on a patchwork of tools to cover each threat vector, it’s becoming clear that more tools aren’t the answer. Better coordination is. A modern stack doesn’t need to come from a single vendor, but it does need to operate like a single, unified system. That means integrated data, shared intelligence and orchestration that supports real-time response, not after-the-fact analysis. While investment is rising, with 85% of organisations having increased their fraud prevention budgets, it’s crucial to highlight that spending must be strategic. So, what does a modern fraud prevention stack actually look like? And how can organisations build one that’s unified, flexible and future-proof?


CISOs, Start Securing Software's Agentic Future Now

Industry-wide challenges create obstacles to AI governance, leaving leaders uncertain about where to focus their strategic efforts most effectively. The non-deterministic nature of agents causes them to behave in unexpected ways, which has been proven to disrupt existing security boundaries. Adding to this security complexity, universal protocols, such as Model Context Protocol and Agent2Agent, are emerging to streamline data access and improve agent interoperability, but their ecosystem-building capabilities introduce additional security considerations. But these challenges cannot stop security leaders from prioritizing AI governance. ... A culture of security now requires AI literacy. 43% of survey respondents acknowledged a widening AI skills gap, which is likely to grow unless technical leaders prioritize upskilling teams to understand model behavior, prompt engineering, and how to evaluate model inputs and outputs critically. Understanding where models are performant versus where their use is suboptimal helps teams avoid unnecessary security risk and technical debt. ... Teams should also recognize that no model can replace human ingenuity. When models fail in domains where security engineers or developers lack expertise, they will not be able to identify the security gaps the model has left behind. CISOs should consider dedicating a portion of learning and development budgets to continuous technical education. 

Daily Tech Digest - July 19, 2025


Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks


AI-Driven Threat Hunting: Catching Zero Day Exploits Before They Strike

Cybersecurity has come a long way from the days of simple virus scanners and static firewalls. Signature-based defenses were sufficient to detect known malware during the past era. Zero-day exploits operate as unpredictable threats that traditional security tools fail to detect. The technology sector saw Microsoft and Google rush to fix more than dozens of zero day vulnerabilities which attackers used in the wild during 2023. The consequences reach extreme levels because a single security breach results in major financial losses and immediate destruction of corporate reputation. AI functions as a protective measure that addresses weaknesses in human capabilities and outdated system limitations. The system analyzes enormous amounts of data from network traffic and timestamps and IP logs, and other inputs to detect security risks. ... So how does AI pull this off? It’s all about finding the weird stuff. Network traffic packets follow regular patterns, but zero-day exploits cause packet size fluctuations and timing irregularities. AI detects anomalies by comparing data against its knowledge base of typical behavior patterns. Autoencoders function as neural networks that learn to recreate data during operation. When an autoencoder fails to rebuild data, it automatically identifies the suspicious activity.


How AI is changing the GRC strategy

CISOs are in a tough spot because they have a dual mandate to increase productivity and leverage this powerful emerging technology, while still maintaining governance, risk and compliance obligations, according to Rich Marcus, CISO at AuditBoard. “They’re being asked to leverage AI or help accelerate the adoption of AI in organizations to achieve productivity gains. But don’t let it be something that kills the business if we do it wrong,” says Marcus. ... “The really important thing to be successful with managing AI risk is to approach the situation with a collaborative mindset and broadcast the message to folks that we’re all in it together and you’re not here to slow them down.” ... Ultimately, the task is for security leaders to apply a security lens to AI using governance and risk as part of the broader GRC framework in the organization. “A lot of organizations will have a chief risk officer or someone of that nature who owns the broader risk across the environment, but security should have a seat at the table,” Norton says. “These days, it’s no longer about CISOs saying ‘yes’ or ‘no’. It’s more about us providing visibility of the risks involved in doing certain things and then allowing the organization and the senior executives to make decisions around those risks.”


Three Invisible Hurdles to Innovation

Innovation changes internal power dynamics. The creation of a new line of business leads to a legacy line of business declining or, at an extreme, shutting down or being spun out. One part of the organization wins; another loses. Why would a department put forward or support a proposal that would put that department out of business or lead it to lose organizational influence? That means senior leaders might never see a proposal that’s good for the whole organization if it is bad for one part of the organization. ... While the natural language interface of OpenAI’s ChatGPT was easy the first time I used it, I wasn’t sure what to do with a large language model (LLM). First I tried to mimic a Google search, and then jumped in and tried to design a course from scratch. The lack of artfully constructed prompts on first-generation technology led to predictably disappointing results. For DALL-E, I tried to prove that AI couldn’t match the skills of my daughter, a skilled artist. Seeing mediocre results left me feeling smug, reaffirming my humanity. ... Social identity theory suggests that individuals often merge their personal identity with the offerings of the company at which they work. Ask them who they are, and they respond with what they do: “I’m a newspaper guy.” So imagine how Gilbert’s message landed with his employees who worked to produce a print newspaper every day.


Beyond Code Generation: How Asimov is Transforming Engineering Team Collaboration

The conventional wisdom around AI coding assistance has been misguided. Research shows that engineers spend only about 10% of their time writing code, while the remaining 70% is devoted to understanding existing systems, debugging issues, and collaborating with teammates on intricate problems. This reality exposes a significant gap in current AI tooling, which predominantly focuses on code generation rather than comprehension. “Engineers don’t spend most of their time writing code. They spend most of their time understanding code and collaborating with other teammates on hard problems,” explains the Reflection team. This insight drives Asimov’s unique approach to engineering productivity. ... As engineering teams grapple with increasingly complex systems and distributed architectures, tools like Asimov offer a glimpse into a future where AI serves as a genuine collaborative partner rather than just a code completion engine. By focusing on understanding and context rather than mere generation, Asimov addresses the actual pain points that slow down engineering teams. The tool is currently in early access, with Reflection AI selecting teams for initial deployment. 


Data Management Makes or Breaks AI Success for SLGs

“Many agencies start their AI journeys with a specific use case, something simple like a chatbot,” says John Whippen, regional vice president for U.S. public sector at Snowflake. “As they show the value of those individual use cases, they’ll attempt to make it more prevalent across an entire agency or department.” Especially in populous jurisdictions, readying data for large-scale AI initiatives can be challenging. Nevertheless, that initial data consolidation, governance and management are central to cross-agency AI deployments, according to Whippen and other industry experts. ... Most state agencies operate on a hybrid cloud model. Many of them work with multiple hyperscalers and likely will for the foreseeable future. This creates potential data fragmentation. However, where the data is stored is not necessarily as important as the ability to centralize how it is accessed, managed and manipulated. “Today, you can extract all of that data much more easily, from a user interface perspective, and manipulate it the way you want, then put it back into the system of record, and you don't need a data scientist for that,” says Mike Hurt, vice president of state and local government and education for ServiceNow. “It's not your grandmother's way of tagging anymore.”


The Role Of Empathy In Effective Leadership

To maintain good working relationships with others, you must be willing to understand their experiences and perspectives. As we all know, everyone sees the world through a different lens. Even if you don’t fully align with others’ worldviews, as a leader, you must create an environment where individuals feel heard and respected. ... Operate with perspective and cultivate inclusive practices. In a way, empathy is being able to see through the eyes of others. Many of the unspoken rules of the corporate world are based on the experience of white males in the workforce. Considering the countless other demographics in the modern workforce, most of these nuances or patterns are outdated, exclusionary, counterproductive, and even harmful to some people. Can you identify any unspoken rules you enforce or adhere to within your career? Sometimes, they are hard to spot right away. In my research as a DEI professional, I’ve encountered many unspoken cultural rules that don’t consider the perspective of diverse groups. ... Empathetic leaders create more harmonious workplaces and inspire their teams to perform better. Creating an atmosphere of acceptance and understanding sets the stage for healthier dynamics. In questioning the status quo, you root out any counterproductive trends in company culture that need addressing.


New Research on the Link Between Learning and Innovation

Cognitive neuroscience confirms what experienced leaders intuitively know: Our brains need structured breaks to turn experiences into actionable knowledge. Just as sleep helps consolidate daily experiences into long-term memory, structured reflection allows teams to integrate insights gained during exploration phases into strategies and plans. Without these deliberate rhythms, teams risk becoming overwhelmed by continual information intake—akin to endlessly inhaling without pausing to exhale—leading to confusion and burnout. By intentionally embedding reflective pauses within structured learning cycles, teams can harness their full innovative potential. ... You can think of a team’s learning activities as elements of a musical masterpiece. Just as great compositions—like Beethoven’s Fifth Symphony—skillfully balance moments of tension with moments of powerful resolution, effective team learning thrives on the structured interplay between building up and then releasing tension. Harmonious learning occurs when complementary activities, such as team reflection and external expert consultations, reinforce one another, creating moments of clarity and alignment. Conversely, dissonance arises when conflicting activities, like simultaneous experimentation and detailed planning, collide and cause confusion.


Optimizing Search Systems: Balancing Speed, Relevance, and Scalability

Efficiently managing geospatial search queries on Uber Eats is crucial, as users often seek outnearby restaurants or grocery stores. To achieve this, Uber Eats uses geo-sharding, a technique that ensures all relevant data for a specific location is stored within a single shard. This minimizes query overhead and eliminates inefficiencies caused by fetching and aggregating results from multiple shards. Additionally, geo sharding allows first-pass ranking to happen directly on data nodes, improving speed and accuracy. Uber Eats primarily employs two geo sharding techniques: latitude sharding and hex sharding. Latitude sharding divides the world into horizontal bands, with each band representing a distinct shard. Shard ranges are computed offline using Spark jobs, which first divide the map into thousands of narrow latitude stripes and then group adjacent stripes to create shards of roughly equal size. Documents falling on shard boundaries are indexed in both neighboring shards to prevent missing results. One key advantage of latitude sharding is its ability to distribute traffic efficiently across different time zones. Given that Uber Eats experiences peak activity following a "sun pattern" with high demand during the day and lower demand at night, this method helps prevent excessive load on specific shards. 


How to beat the odds in tech transformation

Creating an enterprise-wide technology solution requires defining a scope that’s ambitious and quickly actionable and has an underlying objective to keep your customers and organization on board throughout the project. ... Technology may seem even more autonomous, but tech transformations are not. They depend on the full engagement and alignment of people across your organization, starting with leadership. First, senior leaders need to be educated so they clearly understand not just the features of the new technology but more so the business benefits. This will motivate them to champion engagement and adoption throughout the organization. ... Even the best-planned journeys to new frontiers will run into unexpected challenges. For instance, while we had extensively planned for customer migration during our tech transformation, the effort required to make it go as quickly and smoothly as possible was greater than expected. After all, we provide mission-critical solutions, so customers didn’t simply want to know we had validated a new product. They wanted reassurance we had validated their specific use cases. In response, we doubled down on resources to give them enhanced confidence. As mentioned, we introduced a protocol of parallel systems, running the old and new simultaneously. 


Leadership vs. Management in Project Management: Walking the Tightrope Between Vision and Execution

At its core, management is about control. It’s the science of organising tasks, allocating resources, and ensuring deliverables meet specifications. Managers thrive on Gantt charts, risk matrices, and status reports. They’re the architects of order in a world prone to chaos.. It’s the science of organising tasks, allocating resources, and ensuring deliverables meet specifications. Managers thrive on Gantt charts, risk matrices, and status reports. They’re the architects of order in a world prone to chaos. Leadership, on the other hand, is about inspiration. It’s the art of painting a compelling vision, rallying teams around a shared purpose, and navigating uncertainty with grit. ... A project manager’s IQ might land them the job, but their EQ determines their success. Leadership in project management isn’t just about charisma—it’s about sensing unspoken tensions, motivating burnt-out teams, and navigating stakeholder egos. ... The debate between leadership and management is a false dichotomy. Like yin and yang, they’re interdependent forces. A project manager who only manages becomes a bureaucrat, obsessed with checkboxes but blind to the bigger picture. One who only leads becomes a dreamer, chasing visions without a roadmap. The future belongs to hybrids—those who can rally a team with a compelling vision and deliver a flawless product on deadline.

Daily Tech Digest - May 06, 2025


Quote for the day:

"Limitations live only in our minds. But if we use our imaginations, our possibilities become limitless." --Jamie Paolinetti


A Primer for CTOs: Taming Technical Debt

Taking a head-on approach is the most effective way to address technical debt, since it gets to the core of the problem instead of slapping a new coat of paint over it, Briggs says. The first step is for leaders to work with their engineering teams to determine the current state of data management. "From there, they can create a realistic plan of action that factors in their unique strengths and weaknesses, and leaders can then make more strategic decisions around core modernization and preventative measures." Managing technical debt requires a long-term view. Leaders must avoid the temptation of thinking that technical debt only applies to legacy or decades old investments, Briggs warns. "Every single technology project has the potential to add to or remove technical debt." He advises leaders to take a cue from medicine's Hippocratic Oath: "Do no harm." In other words, stop piling new debt on top of the old. ... Technical debt can be useful when it's a conscious, short-term trade-off that serves a larger strategic purpose, such as speed, education, or market/first-mover advantage, Gibbons says. "The crucial part is recognizing it as debt, monitoring it, and paying it down before it becomes a more serious liability," he notes. Many organizations treat technical debt as something they're resigned to live with, as inevitable as the laws of physics, Briggs observes. 


AI agents are a digital identity headache despite explosive growth

“AI agents are becoming more powerful, but without trust anchors, they can be hijacked or abused,” says Alfred Chan, CEO of ZeroBiometrics. “Our technology ensures that every AI action can be traced to a real, authenticated person—who approved it, scoped it, and can revoke it.” ZeroBiometrics says its new AI agent solution makes use of open standards and technology, and supports transaction controls including time limits, financial caps, functional scopes and revocable keys. It can be integrated with decentralized ledgers or PKI infrastructures, and is suggested for applications in finance, healthcare, logistics and government services. The lack of identity standards suited to AI agents is creating a major roadblock for developers trying to address the looming market, according to Frontegg. That is why it has developed an identity management platform for developers building AI agents, saving them from spending time building ad-hoc authentication workflows, security frameworks and integration mechanisms. Frontegg’s own developers discovered these challenges when building the company’s autonomous identity security agent Dorian, which detects and mitigates threats across different digital identity providers. “Without proper identity infrastructure, you can build an interesting AI agent — but you can’t productize it, scale it, or sell it,” points out Aviad Mizrachi, co-founder and CTO of Frontegg.


Rethinking digital transformation for the agentic AI era

Most CIOs already recognize that generative AI presents a significant evolution in how IT departments can deliver innovations and manage IT services. “Gen AI isn’t just another technology; it’s an organizational nervous system that exponentially amplifies human intelligence,” says Josh Ray, CEO of Blackwire Labs. “Where we once focused on digitizing processes, we’re now creating systems that think alongside us, turning data into strategic foresight. The CIOs who thrive tomorrow aren’t just managing technology stacks; they’re architecting cognitive ecosystems where humans and AI collaborate to solve previously impossible challenges.” IT service management (ITSM) is a good starting point for considering gen AI’s potential. Network operation centers (NOCs) and site reliability engineers (SREs) have been using AIOps platforms to correlate alerts into time-correlated incidents, improve the mean time to resolution (MTTR), and perform root cause analysis (RCA). As generative and agentic AI assists more aspects of running IT operations, CIOs gain a new opportunity to realign IT ops with more proactive and transformative initiatives. ... “Opportunities such as gen AI for hotfix development and predictive AI to identify, correlate, and route incidents for improved incident response are transforming our business, resulting in improved customer satisfaction, revenue retention, and engineering efficiency.”


Strengthening Software Security Under the EU Cyber Resilience Act: A High-Level Guide for Security Leaders and CISOs

One of the hardest CRA areas for organizations to get a handle on is knowing and proving where appropriate controls and configurations are in place vs. where they’re lacking. This lack of visibility often leads to underutilized licenses, unchecked areas of product development, and the potential for unauthorized access into sensitive areas of the development environment. One of the ways security-conscious organizations are combating this is through the creation of “paved pathways” that include very specific technology and security tooling to be utilized across all their development environments, but this often requires extreme vigilance of deviations within those environments and very few ways to automate the adherence to those standards. Legit Security not only automatically inventories and details what and where controls exist within an SDLC so you can ensure 100% coverage of your application portfolio, but we also analyze all of the configurations throughout the entirety of the build process to find any that could allow for supply chain attacks or unauthorized access to SCMs or CI/CD systems. This ensures that your teams are using secure defaults and putting appropriate guardrails into development workflows. This also automates baseline enforcement, configuration management, and quick resets to a known safe state when needed.


Observability 2.0? Or Just Logs All Over Again?

As observability solutions have ostensibly become more mature over the last 15 years, we still see customers struggle to manage their observability estates, especially with the growth of cloud native architectures. So-called “unified” observability solutions bring tools to manage the three pillars, but cost and complexity continue to be major pain points. Meanwhile, the volume of data has kept rising, with 37% of enterprises ingesting more than a terabyte of log data per day. Legacy logging solutions typically deal with the problems of high data volume and cardinality through short retention windows and tiered storage — meaning that data is either thrown away after a fairly short period of time or stored in frozen tiers where it goes dark. Meanwhile, other time series or metric databases take high-volume source data, aggregate it into metrics, then discard the underlying logs. Finally, tracing generates so much data that most traces aren’t even stored in the first place. Head-based sampling retains a small percentage of traces, typically random, while tail-based sampling allows you to filter more intelligently but at the cost of efficient processing. And then traces are typically discarded after a short period of time. There’s a common theme here: While all of the pillars of observability provide different ways of understanding and analyzing your systems, they all deal with the problem of high cardinality by throwing data away.


What it really takes to build a resilient cyber program

A good place to begin is the ‘Identify’ phase from NIST’s Incident Response guide. You need to identify all of your risks, vulnerabilities, and assets. Prioritize them and then determine the best way to protect and detect threats against those assets. Assets not only include physical things like laptops and phones, but also anything that is in a Cloud Service Provider, SaaS applications, and digital items like domain names. Determine the threats, risks and vulnerabilities to those assets. Prioritize them and determine how your organization is going to protect and monitor them. Most organizations don’t have a very good idea of what they actually own, which is why they tend to be reactive and waste time on actions that do not apply to them. How often has a security analyst been asked if a recently disclosed zero-day affects the company? They perform the scans and pull in data manually only to discover they don’t run that piece of software or hardware. ... Many organizations use a red team exercise to try and blame someone or group for a deficiency or even to score an internal political point. That will never end well for anyone. The name of the game is improvement in your security posture and these help identify areas of weakness. There might be things that don’t get fixed immediately, or maybe ever, but knowing that the gap exists is the critical first step. 


Top tips for successful threat intelligence usage

“The value of threat intelligence is directly tied to how well it is ingested, processed, prioritized, and acted upon,” wrote Cyware in their report. This means a careful integration into your existing constellation of security tools so you can leverage all your previous investment in your acronyms of SOARs, SIEMs and XDRs. According to the Greynoise report “you have to embed the TIP into your existing security ecosystem, making sure to correlate your internal data and use your vulnerability management tools to enhance your incident response and provide actionable analytics.” The keyword in that last sentence is actionable. Too often threat intel doesn’t guide any actions, such as kicking off a series of patches to update outdated systems, or remediation efforts to firewall a particular network segment or taking offline an offending device. ... Part of the challenge here is to prevent siloed specialty mindsets from making the appropriate remedial measures. “I’ve seen time and time again when the threat intel or even the vulnerability management team will send out a flash notification about a high priority threat only for it to be lost in a queue because the threat team did not chase it up. It’s just as important for resolver groups to act as it is for the threat team to chase it,” Peck blogged.


How empathy is a leadership gamechanger in a tech-first workplace

Empathy isn’t just about creating a feel-good workplace—it’s a powerful driver of innovation and performance. When leaders lead with empathy, they unlock something essential: a work culture where people feel safe to speak up, take risks, and bring their boldest ideas to life. That’s where real progress happens. Empathy also enhances productivity, employees who feel valued and supported are more motivated to perform at their highest potential. Research shows that organisations led by empathetic leaders experience a 20% increase in customer loyalty, underscoring the far-reaching impact of a people-first approach. When employees thrive, so do customer relationships, business outcomes, and overall organisational growth. In India, where workplace dynamics are often shaped by hierarchical structures and collectivist values, empathetic leadership can be transformative. By prioritising open communication, recognition, and personal development, leaders can strengthen employee morale, increase job satisfaction, and drive long-term loyalty. ... In a tech-first world, empathy isn’t a nice-to-have, it’s a leadership gamechanger. When leaders lead with heart and clarity, they don’t just inspire people, they unlock their full potential. Empathy fuels trust, drives innovation, and builds workplaces where people and ideas thrive. 


Analyzing the Impact of AI on Critical Thinking in the Workplace

Instead of generating content from scratch, knowledge workers increasingly invest effort in verifying information, integrating AI-generated outputs into their work, and ensuring that the final outputs meet quality standards. What is motivating this behavior? Some explanations for these trends could be to enhance work quality, develop professional AI skills, laziness, and the desire to avoid negative outcomes like errors. For example, someone who is not very proficient in the English language could use GenAI to make their emails sound a lot more natural and avoid any potential misunderstandings. On the flipside, there are some drawbacks to using GenAI. These include overreliance on GenAI for routine or lower-stakes tasks, time pressures, limited awareness of potential AI pitfalls, and challenges in improving AI responses. ... The findings suggest that GenAI tools can reduce the perceived cognitive load for certain tasks. However, they find that GenAI poses risks to workers’ critical thinking skills by shifting their roles from active problem-solvers to AI output overseers who must verify and integrate responses into their workflows. Once again (and this can not be emphasized enough) the study underscores the need for designing GenAI systems that actively support critical thinking. This will ensure that efficiency gains do not come at the expense of developing essential critical thinking skills.


Harnessing Data Lineage to Enhance Data Governance Frameworks

One of the most immediate benefits is improved data quality and troubleshooting. When a data quality issue arises, data lineage’s detailed trail can help you to quickly identify where the problem originated, so that you can fix errors and minimize downtime. Data lineage also enables better planning, since it allows you to run more effective data protection impact analysis. You can map data dependencies to assess how changes like system upgrades or new data integrations might affect your overall data integrity. This is especially valuable during migrations or major updates, as you can proactively mitigate any potential disruptions. Furthermore, regulatory compliance is also greatly enhanced through data lineage. With a complete audit trail documenting every data movement and transformation, organizations can more easily demonstrate compliance with regulations like GDPR, CCPA, and HIPAA. ... Developing a comprehensive data lineage framework can take substantial time, not to mention significant funds. In addition to the various data lineage tools, you might also need to have dedicated hosting servers, depending on the level of compliance needed, or to hire data lineage consultants. Mapping out complex data flows and maintaining up-to-date lineage in a data landscape that’s constantly shifting requires continuous attention and investment.

Daily Tech Digest - September 05, 2022

How to handle a multicloud migration: Step-by-step guide

The first order of business is to determine exactly what you want out of a multicloud platform; what needs are in play, which functions and services should be relocated, which ones may or should stay in house, what constitutes a successful migration, and what advantages and pitfalls may arise? You may have a lead on a vendor offering incentives or discounts, or company regulations may prohibit another type of vendor or multicloud service, and this should be part of the assessment. The next step is to determine what sort of funding you have to work with and match this against the estimated costs of the new platform based on your expectations as to what it will provide you. There may be a per-user or per-usage fee, flat fees for services, annual subscriptions or specific support charges. It may be helpful to do some initial research on average multicloud migrations or vendors offering the services you intend to utilize to help provide finance and management a baseline as to what they should expect to allocate for this new environment, so there are no misconceptions or surprises regarding costs.


Intro to blockchain consensus mechanisms

Every consensus mechanism exists to solve a problem. Proof of Work was devised to solve the problem of double spending, where some users could attempt to transfer the same assets more than once. The first challenge for a blockchain network was thus to ensure that values were only transferred once. Bitcoin's developers wanted to avoid using a centralized “mint” to track all transactions moving through the blockchain. While such a mint could securely deny double-spend transactions, it would be a centralized solution. Decentralizing control over assets was the whole point of the blockchain. Instead, Proof of Work shifts the job of validating transactions to individual nodes in the network. As each node receives a transaction, it attempts the expensive calculation required to discover a rare hash. The resulting "proof of work" ensures that a certain amount of time and computing power were expended by the node to accept a block of transactions. Once a block is hashed, it is propagated to the network with a signature. Assuming it meets the criteria for validity, other nodes in the network accept this new block, add it to the end of the chain, and start work on the next block as new transactions arrive.


Data’s Struggle to Become an Asset

Data’s biggest problem is that it is intangible and malleable. How can you attach a value to something that is always changing, may disappear, and has no physical presence beyond the bytes it appropriates in a database? In many organizations, there are troves of data that are collected and never used. Data is also easy to accumulate. Collectively, these factors make it easy for corporate executives to view data as a commodity, and not as something of value. Researchers like Deloitte argue that data will never become an indispensable asset for organizations unless it can deliver tangible business results: “Finding the right project requires the CDO (chief data officer) to have a clear understanding of the organization's wants and needs,” according to Deloitte. “For example, while developing the US Air Force’s data strategy, the CDO identified manpower shortages as a critical issue. The CDO prioritized this limitation early on in the implementation of the data strategy and developed a proof of concept to address it.”


In The Face Of Recession, Investing In AI Is A Smarter Strategy Than Ever

Many business leaders make the mistake of overspending on RPA platforms, blinded by the promise of some future ROI. In reality, due to the need to customize RPA to every client, these decision-makers don’t actually know how long it will take to begin reaping the benefits—if they ever do. I, myself, have made this mistake in the past, spending far too much time and money on a tedious RPA solution that was intended to solve a customer success back-office function, only to find that after the overhead of managing it, the gains were marginal. If business leaders want to fully maximize their investments and reap quicker benefits, they’ll go one giant leap beyond automation, landing in the realm of autonomous artificial intelligence (AI). True AI solutions, which continually learn from a company’s data to become increasingly accurate with time, are the holy grail of ROI. Finance leaders are in a great position to lead the way within their own companies by implementing AI solutions in the accounting function. Across industries, these teams are sagging under the weight of endless, tedious accounting tasks, using outdated, ineffective technology and wasting significant time fixing human errors.


Top 8 Data Science Use Cases in The Finance Industry

Financial institutions can be vulnerable to fraud because of their high volume of transactions. In order to prevent losses caused by fraud, organizations must use different tools to track suspicious activities. These include statistical analysis, pattern recognition, and anomaly detection via machine/deep learning. By using these methods, organizations can identify patterns and anomalies in the data and determine whether or not there is fraudulent activity taking place. ... Tools such as CRM and social media dashboards use data science to help financial institutions connect with their customers. They provide information about their customers’ behavior so that they can make informed decisions when it comes to product development and pricing. Remember that the finance industry is highly competitive and requires continuous innovation to stay ahead of the game. Data science initiatives, such as a Data Science Bootcamp or training program, can be highly effective in helping companies develop new products and services that meet market demands. Investment management is another area where data science plays an important role. 


A Bridge Over Troubled Data: Giving Enterprises Access to Advanced Machine Learning

Thankfully, the smart data fabric concept removes most of these data troubles, bridging the gap between the data and the application. The fabric focuses on creating a unified approach to access, data management and analytics. It builds a universal semantic layer using data management technologies that stitch together distributed data regardless of its location, leaving it where it resides. A fintech organisation can build an API-enabled orchestration layer, using the smart data fabric approach, giving the business a single source of reference without the necessity to replace any systems or move data to a new, central location. Capable of in-flight analytics, more advanced data management technology within the fabric provides insights in real time. It connects all the data including all the information stored in databases, warehouses and lakes and provides the vital and seamless support for end-users and applications. Business teams can delve deeper into the data, using advanced capabilities such as business intelligence. 


Why You Should Start Testing in the Cloud Native Way

Consistently tracking metrics around QA and test pass/failure rates is so important when you’re working in global teams with countless different types of components and services. After all, without benchmarking, how can you measure success? TestKube does just that. Because it’s aware of the definition of all your tests and results, you can use it as a centralized place to monitor the pass/failure rate of your tests. Plus it defines a common result format, so you get consistent result reporting and analysis across all types of tests. ... If you run your applications in a non-serverless manner in the cloud and don’t use virtual machines, I’m willing to bet you probably use containers at this point and you might have faced the challenges of containerizing all your testing activities. Well, with cloud native tests in Testkube, that’s not necessary. You can just import your test files into Testkube and run them out of the box. ... Having restricted access to an environment that we need to test or tinker with is an issue that most of us face at some point in our careers.


Why IT leaders should prioritize empathy

It’s simple enough to practice empathy outside of work, but IT challenges make practicing empathy at work a bigger struggle. Fairly or unfairly, many customers expect technology to work 100 percent of the time. When it doesn’t, it falls on IT leaders to go into crisis mode. Considering many of these applications are mission-critical to the customer’s organizational performance, their reaction makes sense. An unempathetic employee in this situation would ignore the context behind a customer’s emotional response. They might go on the defensive or fail to address the customer’s concerns with urgency. A response like this can prove detrimental to customer loyalty and retention – it takes up to 12 positive customer experiences to make up for one negative experience. Every workplace consists of many different personality types and cultural backgrounds – all with different understandings of and comfort toward practicing empathy. Because of this diversity, aligning on a single company-wide approach to empathy is easier said than done. Yet if your organization fails to secure employee buy-in around the importance of empathy, you risk alienating your customers and letting employees who aren’t well-versed in empathetic communication hold you back.


What devops needs to know about data governance

Looking one step beyond compliance considerations, the next level of importance that drives data governance efforts is trust that data is accurate, timely, and meets other data quality requirements. Moses has several recommendations for tech teams. She says, “Teams must have visibility into critical tables and reports and treat data integrity like a first-class citizen. True data governance needs to go beyond defining and mapping the data to truly comprehending its use. An approach that prioritizes observability into the data can provide collective significance around specific analytics use cases and allow teams to prioritize what data matters most to the business.” Kirk Haslbeck, vice president of data quality at Collibra, shares several best practices that improve overall trust in the data. He says, “Trusted data starts with data observability, using metadata for context and proactively monitoring data quality issues. While data quality and observability establish that your data is fit to use, data governance ensures its use is streamlined, secure, and compliant. Both data governance and data quality need to work together to create value from data.”


The Power of AI Coding Assistance

“With AI-powered coding technology like Copilot, developers can work as before, but with greater speed and satisfaction, so it’s really easy to introduce,” explains Oege De Moor, vice president of GitHub Next. “It does help to be explicit in your instructions to the AI.” He explains that during the Copilot technical preview, GitHub heard from users that they were writing better and more precise explanations in code comments because the AI gives them better suggestions. “Users also write more tests because Copilot encourages developers to focus on the creative part of crafting good tests,” De Moor explains. “So, these users feel they write better code, hand in hand with Copilot.” He adds that it is, of course, important that users are made aware of the limitations of the technology. “Like all code, suggestions from AI assistants like Copilot need to be carefully tested, reviewed, and vetted,” he says. “We also continuously work to improve the quality of the suggestions made by the AI.” GitHub Copilot is built with Codex -- a descendent of GPT-3 -- which is trained on publicly available source code and natural language.



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - April 18, 2022

Which Computational Universe Do We Live In?

Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment. To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible? ... Most cryptographers, Ishai said, believe that at least some cryptography does exist, so we likely live in Cryptomania or Minicrypt. But they don’t expect a proof of this anytime soon. Such a proof would require ruling out the other three worlds — and ruling out Algorithmica alone already requires solving the “P versus NP” problem, which computer scientists have struggled with for decades.


The AI in a jar

Chomsky sparked a reorientation of psychology toward the brain dubbed the cognitive revolution. The revolution produced modern cognitive science, and functionalism became the new dominant theory of the mind. Functionalism views intelligence (i.e., mental phenomenon) as the brain’s functional organization where individuated functions like language and vision are understood by their causal roles. Unlike behaviorism, functionalism focuses on what the brain does and where brain function happens. However, functionalism is not interested in how something works or if it is made of the same material. It doesn’t care if the thing that thinks is a brain or if that brain has a body. If it functions like intelligence, it is intelligent like anything that tells time is a clock. It doesn’t matter what the clock is made of as long as it keeps time. ... Unfortunately, functions do not think. They are aspects of thought. The issue with functionalism—aside from the reductionism that results from treating thinking as a collection of functions (and humans as brains)—is that it ignores thinking. 


Microsoft’s Newest AI technology, “PeopleLens,” is Helping Blind People See

PeopleLens was developed over two years by a team of Microsoft engineers and computer scientists. The aim was to create a machine learning system to help blind people navigate their social surroundings by identifying people and objects in photos. The team used a dataset of images annotated with labels indicating the presence of people and objects. They then used deep learning algorithms to train a computer vision model that could identify these labels in new images. ... The system uses computer vision algorithms to help the blind person understand their social surroundings. PeopleLens firstly identifies people in a scene and then provides information about them, such as their name and position. The PeopleLens platform consists of a wearable device and a cloud-based service. The device captures images of the surrounding environment and sends them to the cloud-based service, where they are processed by the machine learning algorithms. This information is then used to generate descriptions of the surrounding environment sent back to the wearable device.


Sustaining Fast Flow with Socio-Technical Thinking

If we shape the domain boundaries right, groups of related business concepts that change together will belong together and there will be fewer social and technical dependencies. Shaping good domain boundaries isn’t always a trivial task. When you stay high-level, you can easily fool yourself into thinking something is a sensible domain like the “customer domain” (this is usually something which connects to everything about the customer and results in a very tightly coupled system). I recommend using techniques like Event Storming and Value Stream Mapping to really get into the details of how your business works before attempting to define domain boundaries. Event Storming is a technique where you map out user journeys and business processes using sticky-notes. There aren’t too many rules, it’s a lo-fi technique which increases participation due to a very small learning curve. There is one rule though: processes are mapped out using domain events which represent something happening in the domain and are phrased in past tense, for example, ETA Calculated, Order Placed, Claim Rejected, and so on.


How to minimize new technical debt

John Kodumal, CTO and cofounder of LaunchDarkly, says, “Technical debt is inevitable in software development, but you can combat it by being proactive: establishing policy, convention, and processes to amortize the cost of reducing debt over time. This is much healthier than stopping other work and trying to dig out from a mountain of debt.” Kodumal recommends several practices, such as “establishing an update policy and cadence for third-party dependencies, using workflows and automation to manage the life cycle of feature flags, and establishing service-level objectives.” ... “The first and most important is proper planning and estimating. The second is to standardize procedures that limit time spent organizing and [allow] more time executing.” Most development teams want more time to plan, but it may not be obvious to product owners, development managers, or executives how planning helps reduce and minimize technical debt. When developers have time to plan, they often discuss architecture and implementation, and the discussions tend to get into technical details. Product owners and business stakeholders may not understand or be interested in these technical discussions.


12 examples of artificial intelligence in everyday life

Today, many larger banks give you the option of depositing checks through your smartphone. Instead of actually walking to a bank, you can do it with just a couple of taps. Besides the obvious safeguards when it comes to accessing your bank account through your phone, a check also requires your signature. Now banks use AIs and machine learning software to read your handwriting, compare it with the signature you gave to the bank before, and safely use it to approve a check. In general, machine learning and AI tech speeds up most operations done by software in a bank. This all leads to the more efficient execution of tasks, decreasing wait times and cost. ... And while we are on the subject of banking, let's talk about fraud for a little bit. A bank processes a huge amount of transactions every day. Tracking all of that, analyzing, it's impossible for a regular human being. Furthermore, how fraudulent transactions look changes from day to day. With AI and machine learning algorithms, you can have thousands of transactions analyzed in a second. Furthermore, you can also have them learn, figure out what problematic transactions can look like, and prepare themselves for future issues.


Is The Modern Data Warehouse Broken?

The first problem is the disconnect, really chasm, it creates between the data consumer (analysts/data scientists) and the data engineer. A project manager and a data engineer will build pipelines upstream from the analyst, who will be tasked with answering certain business questions from internal stakeholders. Inevitably, the analyst will discover that data will not answer all of their questions and that the program manager and data engineer have moved on. The second challenge arises when the analyst’s response is to go directly into the warehouse and write a brittle 600 line SQL query to get their answer. Or, a data scientist might find the only way they can build their model is to extract data from production tables which operate as the implementation details of services. The data in production tables are not intended for analytics or machine learning. In fact, service engineers often explicitly state NOT to take critical dependencies on this data considering it could change at any time. However, our data scientist needs to do their job so they do it anyway and when the table is modified everything breaks downstream.


An open invitation for women to join the Web3 movement

It’s important to understand some of the reasons why crypto has received the “boys club” reputation so we can smash it. At its core, I believe that because crypto was billed as a risky investment at the start. Women, who are naturally more risk averse, shielded away from the initial wave. Today, the gap between men and women in crypto aligns with the legacy of traditional investment verticals skewing toward men. ... In order for the movement to grow and gain legitimacy, we need everyone involved. I’d like to challenge men involved in Web3 to think of a woman they can invite to their next meeting. And, I’d like to challenge women to ask questions and see this opportunity as a way to align their wealth with men. This is a moment in which you can change the course of female wealth not just today, but well into the future. There are many women now joining the movement inviting others in, as well. It’s starting. And, I’m so pleased to be at the forefront of the shift. Web3 is making its debut in traditionally female venues now. Look no further than Shopify, the online sales platform, which reports 52% of its customers are women, is creating a marketplace for NFT sales.


It’s not enough for CEOs to empathize with employees

CEOs who live up to Doctorow’s caricature by shutting down their emotions and coldly making decisions that harm people also incur a personal cost. Hougaard adds: “You turn into someone who you probably won’t like.” Often, empathy is touted as the antidote to mean business. But Hougaard thinks that an approach to leadership based solely on empathy has its own adverse side effects. “Leaders can literally take on the suffering of the people that they are inflicting suffering on and experience empathy burnout,” he explained. “Many CEOs tell me that they make multibillion-dollar decisions and sleep fine at night. But when they have to give tough feedback to employees or restructure the workforce, they don’t sleep for weeks.” They’re missing sleep because they don’t realize that empathy is only the first step in dealing with emotionally fraught people issues. “The mantra here is: connect with empathy but lead with compassion,” said Hougaard. “Empathy is nice for people, because they’re not alone anymore, but it’s not really helping them to get out of their suffering. Compassion is an intention. ...”


Is your middle management freezing progress? 4 ways to empower change

It is important to demystify what organizational culture means and how it impacts business outcomes, customer success, and employee satisfaction. It doesn’t have to be a top-down narrative that’s adopted universally: culture can be created at a team level. Managers have a huge influence on the subculture of their part of the organization. Managers can proactively opt to create a positive organizational culture. Adopting an open leadership mindset combined with open management practices evidently impacts key outcomes like customer satisfaction, employee engagement, innovation, and profitability. For an employee, the organization begins with their manager. Managers need to ask “What is the experience I am creating for my team?” Ask basic questions like “when do we want to meet?” and “how do we want to organize ourselves?” If there are bigger decisions to be made, consider how teams could be involved. Now, more than ever, employees are looking for empathy from their executives, to be consulted on their future, not just to have a meaningful say in the decisions that affect them, but what’s being decided on in the first place.



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract