
Daily Tech Digest - January 14, 2024

Quantum mechanics uncovers hidden patterns in the stock market

What does this mean for the stock market? It implies that higher volatility and a slower reversion to equilibrium amplify herding behavior among investors, especially during times of uncertainty and information asymmetry. The study goes further by testing this model with empirical data from the U.S. stock market. Using the growth rate of gross domestic product (GDP) and forecaster uncertainty as indicators for business cycles and economic uncertainty, respectively, the researchers found a positive correlation between the power law exponent and the GDP growth rate, and a negative correlation with forecaster uncertainty. This confirms their theoretical predictions and highlights the role of economic uncertainty in linking business cycles with herding behavior in stock returns. ... “Our study shows that quantum mechanics can be a useful tool to understand the stock market, a complex system with many interacting agents. We hope that our study can inspire more interdisciplinary research that combines physics and finance to explore the hidden patterns and mechanisms of the stock market,” he states.
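The excerpt hinges on measuring a power-law exponent in stock returns and correlating it with macro indicators. As an illustrative sketch only (the paper's exact estimator may differ), the Hill estimator is a standard way to measure such a tail exponent from return data:

```python
import math

def hill_exponent(values, k):
    """Hill estimator of a power-law tail exponent, computed from the
    k largest observations (e.g., absolute daily returns).
    Illustrative sketch; not the study's own methodology."""
    x = sorted(values, reverse=True)
    threshold = x[k]                 # the (k+1)-th largest value
    # Average log-excess over the threshold; its reciprocal estimates alpha.
    return k / sum(math.log(v / threshold) for v in x[:k])
```

One would compute this exponent per period (e.g., per quarter) and then correlate the series against GDP growth and forecaster uncertainty, as the study describes.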


'We Never Upskill Fast Enough': NTT DATA Services CEO Bob Pryor on mastering change

It's always a challenge, and to be honest, we never upskill fast enough given the myriad of options available. However, we're heavily investing in training, development, and skilling across all levels. Retaining talent involves helping them acquire more advanced technologies and skills in high-demand disciplines. Individuals tend to find greater satisfaction in roles that require complexity over those that are simpler to master. Constantly evolving the mix of skills, technology, and labour is crucial. Take AI, for example—it doesn't eliminate labour; it enhances people's efficacy when working with AI. In healthcare, top oncologists use advanced AI algorithms for diagnosis, medical devices, and treatment. The challenge isn't whether they are displaced by technology but whether we're scaling them fast enough to use the advanced technologies we're investing in and developing. Working effectively with AI involves having people smart enough to ask the right questions—what to create, what questions to ask, and how to interpret language models. 


5 Ways To Upskill As A Leader And Gain Respect From Your Team

Leadership is about building relationships, not task lists. This year, upskill yourself by building these skills to develop a leadership style that inspires cooperation and motivation, not fear. ... Being polite shows the people around you that you respect them, and they are more likely to return the favor. It costs you nothing to be kind. A basic greeting can go a long way, as can asking about your employees’ weekends, family, etc. Remember to say please and thank you. Never interrupt when your employees are talking, and show that you respect their time, work, and ideas. ... Bossing people around doesn’t feel great long term. You know when there’s tension in your office and when people aren’t glad to see you. It’s not good for your mental health to spend nine hours a day (or more) with people who resent your presence. When you tap into your humanity to create better relationships with your employees and become a leader people enjoy working with, not only will you feel more respected as a person, but you’ll likely also enjoy the benefits of a happier workforce, such as higher productivity, better work and even higher profits.


Yes, We're Still Messing Up Hybrid Work. Here's Where Exactly We're Going Wrong.

Hybrid work environments are dynamic, and what works one day may not be effective the next. Managers must be trained to be flexible in their leadership approach, adapting to the varying needs of their team members. This adaptability also means being open to feedback and willing to continuously learn and evolve their management style. It involves understanding the unique challenges and opportunities of managing remote and in-office team members and being adept at creating a cohesive team culture that bridges the physical divide. Honing communication skills is another key focus. In a hybrid setup, clear and inclusive communication is paramount. Managers need to be adept at conveying their messages effectively across various digital platforms, ensuring that every team member, whether remote or in-office, feels equally involved and informed. ... Developing strategies for remote team building is equally important. Hybrid work models can lead to a sense of disconnection among team members.


It’s time to fix flaky tests in software development

Not only do flaky tests threaten the quality and speed of software delivery, they pose a very real threat to the happiness and satisfaction of software developers. Similar to other bottlenecks in the software development process, flaky tests take developers out of their creative flow and prevent them from doing what they love: creating software. Imagine a test passes on one run and fails on the next, with no relevant changes made to the codebase in the interim. This inconsistent behavior can create a fog of confusion, and lead developers down demoralizing rabbit holes to figure out what’s gone wrong. It’s a huge waste of time and energy. By addressing flaky tests, technology leaders can directly improve the developer experience. Instead of getting tangled up in a web of phantom problems that drain their time and energy, developers are able to spend more time on fulfilling tasks like creating new features or refining existing code. When erratic tests are eliminated, the development process runs much more smoothly, resulting in a more motivated and happier team.


Building Cybersecurity Resilience With the Power of Habit

Clear's principles and philosophy, advocating for small yet consistent changes, should resonate deeply with cyberprofessionals. These principles, while not originally intended for use in the cybersecurity realm, can be creatively applied to construct a robust framework for a resilient cybersecurity culture. Clear's principles can be adapted to the cultivation of cybersecurity habits. ... The journey can begin with the fundamentals, for example, the management of cloud access rights. This involves regularly reviewing who has access to what information or resources and why, revoking access rights when an employee changes roles or leaves the organization, and implementing the principle of least privilege, wherein users are given the minimum levels of access necessary to perform their jobs. These minor changes, when consistently applied, can become the building blocks of an enterprise’s cybersecurity framework. The cumulative effect of such microchanges can be surprising. 
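The access-review habit described above (revoke on role change, prune unused rights) can be made concrete as a small recurring script. This is a minimal sketch with hypothetical record fields, not any particular cloud provider's API:

```python
from datetime import date

# Hypothetical access-grant records; field names are illustrative.
grants = [
    {"user": "alice", "role": "engineer", "resource": "prod-db",
     "granted_for_role": "engineer", "last_used": date(2024, 1, 10)},
    {"user": "bob", "role": "marketing", "resource": "prod-db",
     "granted_for_role": "engineer", "last_used": date(2023, 6, 1)},
]

def review(grants, today, stale_after_days=90):
    """Flag grants that no longer match the user's role or sit unused --
    the small, consistently applied habit of a least-privilege review."""
    flagged = []
    for g in grants:
        if g["role"] != g["granted_for_role"]:
            flagged.append((g["user"], "role changed; revoke"))
        elif (today - g["last_used"]).days > stale_after_days:
            flagged.append((g["user"], "unused; consider revoking"))
    return flagged
```

Run on a schedule, a check like this is exactly the kind of microchange whose cumulative effect the excerpt describes.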


Customer Experience Is King, but CIOs Could Do More to Help

The very nature of how customer experience projects get defined and shepherded places IT at the back of the room, as an executor of tasks but not as a strategic leader. Is this bad? Not necessarily, considering that the end business units interacting with the customer ostensibly have expertise in dealing with customers, and are in the best position to know what customers want. However, as technology becomes a more integral element of the selling, informing, fulfillment and servicing of customers, there also is unique expertise that IT brings to the table. It can be invaluable in improving the customer experience, and it can also avert disaster. Being able to sell non-stop, 24/7 to worldwide customers is a major driver of e-commerce, as is the ability to provide customers with self-service options that can reduce internal operational costs for companies. Analytics, which can assess an individual customer's or a demographic's buying habits and anticipate what customers will want to buy next, is also seen as beneficial.


Leveraging Chaos Engineering To Test The Resilience Of Distributed Computing Systems

Chaos engineering helps build the resilience of distributed computing systems and improves their ability to withstand unexpected disruptions. Read on to learn how. Chaos engineering draws on chaos theory, introducing random and unexpected behavior in a controlled manner to identify system weaknesses. How does it benefit organizations? By enabling them to identify system vulnerabilities before failures actually occur. As a result, an organization can proactively adopt measures to plug potential vulnerabilities and improve system stability. Many development teams take an innovative approach to chaos engineering. ... The concept might look similar to stress testing, but the two are not the same. There are some key differences. For one, chaos engineering leverages chaos theory to proactively identify system or network issues and correct them. It also tests and corrects all components at the same time. Here, developers tend to look beyond possible causes and obvious issues.
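The "controlled random failure" idea can be sketched in a few lines: wrap a service call so it fails at a configurable rate, then verify that the client survives the injected faults. All names here are illustrative, not a real chaos-engineering framework:

```python
import random

def with_chaos(func, failure_rate=0.2, rng=None):
    """Wrap a service call so it randomly fails -- a controlled fault
    injection in the spirit of chaos engineering."""
    rng = rng or random.Random()
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("chaos: injected failure")
        return func(*args, **kwargs)
    return wrapped

def resilient_call(func, retries=3):
    """A client designed to survive injected failures by retrying."""
    for _ in range(retries):
        try:
            return func()
        except ConnectionError:
            continue
    raise RuntimeError("service unavailable after retries")
```

If the retry logic (or timeout, fallback, circuit breaker) is missing, the injected failures surface the weakness in a test environment rather than in production.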


Neither ‘Agile’ nor Architecture are Going Anywhere

Want to move the enterprise to little-a or big-A agile? Want to modernize the technology stack? Implement flex points in subsystems? Integration effectiveness? Harness information for outcomes? Deliver technology services? Event-Driven Architecture? Customer-Centric Design? Manage cross-system compatibility and quality attributes? Handle mergers and acquisitions well? Project/team thinking does not account for these outcomes. The product owner doesn’t understand them, and the development lead is focused on speed, simplicity and delivery. They may not understand them either. Architecture connects big outcomes to little decisions. I have seen huge objectives brought low by simple development decisions. ... From the board room to the basement. From idea to outcome. In between operating responsibilities. In between competing business objectives. With partners. With vendors. With an ever-changing technology adoption cycle. From finance to legal to customer impacts, it takes a LOT of facilitation, discussion, decision making and prioritization to deliver a balanced, advantageous technology strategy.


Demystifying Cloud Trends: Statistics and Strategies for Robust Security

The Shared Responsibility Model is a security and compliance framework that defines the responsibilities of cloud service providers (CSPs) and cloud customers for securing every aspect of the cloud environment, including hardware, infrastructure, endpoints, data, configurations, settings, operating system (OS), network controls and access rights. In basic terms, this model helps clarify who is responsible for securing various aspects of the cloud infrastructure, services, and data. The division of responsibilities varies depending on the cloud deployment model. ... Implementing strong IAM practices, enforcing the principle of least privilege to restrict access rights for users and systems, and regularly reviewing and updating access permissions can have a major positive impact on an organization’s cloud security posture. It’s as simple as granting users and other cloud resources authorization to access the required resources only to the extent required. Multi-factor authentication (MFA) adds an additional layer of security, ensuring that only authorized users have access to resources and data.



Quote for the day:

"We become what we think about most of the time, and that's the strangest secret." -- Earl Nightingale

Daily Tech Digest - October 10, 2023

Crafting Leaders: The finishing touches

The process of narrowing the funnel for identifying future leaders must commence soon after fresh talent is inducted into the organization, and certainly long before organizational knocks have bled the spirit, energy and desire-to-be-different from these young men and women. An earlier column explained how alternative fast-track schemes function and ways to choose and groom future leaders from early stages. More recently, I have added two codas to the exposition. When choosing leaders to face the uncertainties of tomorrow, it is not enough to capture their capabilities at the time of selection; one must also take into account the steepness of the slope they have traversed to get there. That is the best guarantee of future resilience and continued development in spite of handicaps. Moreover, constraints of time and a shortage of the right kind of teachers prevent those rising to the top of the pyramid from formally refreshing their knowledge and capabilities as frequently as they should. ... The grooming of Fast-Trackers (FTers) must vary substantially from company to company and from individual to individual.


The undeniable benefits of making cyber resiliency the new standard

"It's about practicing due care and due diligence from a cybersecurity standpoint and having a layered defense with a layered people-process-and-technology-driven program with the right governance and services and tools to enable the mission of the organization so that if there's an event, you can recover and adapt to keep business running," he adds. To do that, CISOs and their executive colleagues must have their cybersecurity basics well established -- basics such as knowing their tolerance for risk, understanding their IT environment, their security controls, their vulnerabilities, and how those all could impact the organization's operations. CISOs aren't limited to these frameworks or the assessment tools created specifically to measure cyber resiliency, says Tenreiro de Magalhaes and others. CISOs can also run tabletop drills and red-team exercises to test, measure and report on resiliency. Repeating such drills and exercises can then track whether the organization's cybersecurity program as well as specific additions to it help improve resiliency over time, experts say.


Hybrid work is in trouble. Here are 4 ways to make it work in the longer term

"We're all humans and we work with each other," he says. "To make hybrid working effective, there must be an element of interaction. There must be a connectivity, both to the business and your team." Warne says balance is essential, so find the right reasons for bringing people together in the office. "At River Island, it's about making sure that people are in for a purpose and not just presenteeism, and making sure that the people who need to work together are able to work together," he says. "If you work with a colleague, it's crucial you don't have a situation where one of you comes into the office and the other one works from home." Warne says his team doesn't have mandated days in the office. Instead, his organization's hybrid-working strategy is all about collaboration. ... However, hybrid working has allowed for an even higher level of flexibility in her organization -- and the key to success has been constant communication. Cousineau continues to listen to feedback from her team. One staff member suggested hybrid all-team meetings were creating a big divide between those who were present and those who weren't.


Evolution of stronger cyber threat actors: The flip side of Gen AI story

Deepfake technology, a subset of Generative AI, allows threat actors to create convincing video and audio forgeries. This presents a substantial threat to organisations as deepfake attacks can tarnish reputations, manipulate public opinion, and even influence financial markets. Imagine a scenario where a CEO’s voice is convincingly mimicked, disseminating false information that impacts stock prices; or consider a deepfake video of a prominent figure endorsing a product or idea they never actually supported. Such manipulations can lead to severe consequences for businesses and society at large. Generative AI is revolutionising the way malware is created. Threat actors can use AI algorithms to generate highly evasive and adaptable malware variants that can easily evade traditional signature-based antivirus solutions. These AI-generated malware strains constantly evolve, making detection and containment a significant challenge for cybersecurity professionals. Moreover, Generative AI allows for the customisation of malware based on the target environment. 


The CIO’s primary job: Developing future IT leaders

The challenge for IT management is to find people who are good at their current job but are also interested in the management side that is necessary for departmental success. In my opinion, the reason many IT departments have decided to go outside IT to bring in CIOs is because IT has not fostered the kind of environment that develops these types of professionals. IT has not traditionally tried very hard to develop strong managers from within. Most people learn to manage by watching what their managers do. And if people have bad managers, the results can be less than optimum. So how do we change that conundrum? First, we must commit our current managers and supervisors to a strong management training program. Once they have been trained in the subtleties of management, then we hopefully will begin to see new managers with skills developed from within. Effective management training can, and should be, structured around techniques that current managers use to be successful. Delegating effectively and encouraging career growth among staff are two examples.


Evolution of Data Partitioning: Traditional vs. Modern Data Lakes

In modern data lakes, data is organized into logical partitions based on specific attributes or criteria, such as day, hour, year, or region. Each partition acts as a subset of the data, making it easier to manage, query, and optimize data retrieval. Partitioning enhances both data organization and query performance. Instead of relying solely on directory-based partitioning or basic column-based partitioning, these systems provide support for complex, nested, and multi-level partitioning structures. This means that data can be partitioned using multiple attributes simultaneously, allowing for highly efficient data pruning during queries. ... Snapshots are a fundamental concept used to capture and manage different versions or states of a table at specific points in time. Snapshots are the key feature that enables time travel, data auditing, schema evolution, and query consistency within modern data lakes such as Iceberg tables. Some important features of snapshots include the following: Each snapshot represents a specific version of the data table. When you create a snapshot, it essentially freezes the state of the table at the moment the snapshot is taken.
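The snapshot mechanics described above, freezing table state on commit and reading it back later, can be illustrated with a toy in-memory table. This is a conceptual sketch, not the Iceberg API:

```python
import copy
from datetime import datetime, timezone

class SnapshotTable:
    """Toy illustration of snapshot-based versioning: each commit
    freezes the table's state, and time travel reads an old snapshot."""
    def __init__(self):
        self.rows = []
        self.snapshots = []   # (snapshot_id, timestamp, frozen rows)

    def append(self, row):
        self.rows.append(row)

    def commit(self):
        """Freeze the current state as a new snapshot; returns its id."""
        snap_id = len(self.snapshots)
        self.snapshots.append(
            (snap_id, datetime.now(timezone.utc), copy.deepcopy(self.rows)))
        return snap_id

    def time_travel(self, snap_id):
        """Read the table exactly as it was at an earlier snapshot."""
        return self.snapshots[snap_id][2]
```

Real table formats store snapshot metadata alongside immutable data files rather than deep-copying rows, but the reader-facing contract, a consistent view as of a chosen snapshot, is the same.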


Will Quantum Computers Become the Next Cyber-Attack Platform?

A quantum cyberattack would likely be similar to today’s identity theft and data breaches. “The only difference is that the damage would be more widespread, since quantum computers could attack a broad class of encryption algorithms rather than just the particular way that a company or data center implements the algorithm, which is how attacks are currently done,” explains Eric Chitambar, associate professor of electrical and computer engineering at the Grainger College of Engineering at the University of Illinois Urbana-Champaign. Chitambar also leads the college’s Quantum Information Group. ... Conducting an enterprise-wide quantum risk assessment to help identify systems that might be most vulnerable to a quantum attack would be a good place to start, Staab says. He also recommends deploying enterprise-wide Quantum Random Number Generator (QRNG) technology to generate quantum-resistant encryption keys. This approach promises crypto agility, implementation of Quantum Key Distribution (QKD) and the development of quantum-resistant algorithms. “As we head toward a quantum computing era, adopting a zero-trust architecture will become more important than ever,” Staab states.


6 Reasons Private LLMs Are Key for Enterprises

Private LLMs can be used with sensitive data — such as hospital patient records or financial data — and then use the power of generative AI to produce groundbreaking achievements in these fields. With the LLM running on your private infrastructure and only exposed to the people who should have access to it, you can build powerful customer-focused applications, chatbots or just provide an easier way for your employees to interact with your company data — without the risk of sending the data to a third party. ... With private LLMs, you can tailor the model and response to your company, industry or customers’ needs. Such specific information is not likely to be included in general or public LLMs. You can feed your LLM with customer support cases, internal knowledge-base articles, sales data, application usage data and so much more, ensuring that the responses you receive are what you’re looking for. ... Controlling versioning or the model you’re using is extremely important because if you change the model that you use to create embeddings, you will need to re-create (or version) all the embeddings you store.
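The versioning point above, that changing the embedding model invalidates stored embeddings, is easy to enforce by tagging every stored vector with the model that produced it. A minimal sketch, with a hypothetical model name and an in-memory stand-in for a vector store:

```python
# Hypothetical current embedding model; in practice this would come
# from configuration.
EMBED_MODEL = "internal-embed-v2"

store = []   # stand-in for a vector database

def save_embedding(doc_id, vector, model=EMBED_MODEL):
    """Persist a vector together with the model version that created it."""
    store.append({"doc_id": doc_id, "vector": vector, "model": model})

def needs_reembedding(current_model=EMBED_MODEL):
    """Vectors from an older model are not comparable to new-model
    vectors, so similarity search over them is meaningless until they
    are re-created."""
    return [row["doc_id"] for row in store if row["model"] != current_model]
```

A model upgrade then becomes a controlled migration: bump the version, list the stale documents, and re-embed them, rather than silently mixing incompatible vector spaces.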


Tech Revolution: The Rise of Automation and Its Impact on Society

To offset potential adverse effects, it is imperative for companies and governments to enact policies and initiatives supporting workers susceptible to automation’s impact. This may encompass training programs designed to furnish workers with the requisite skills to excel in the evolving job market, along with social support programs to aid those grappling with employment challenges. Public policy will emerge as a pivotal determinant of technological evolution’s trajectory and consequences. Economic incentives, education reforms, and immigration policies will directly influence productivity, employment levels, and enhanced economic mobility. ... Central and state government agencies ought to collaborate with industry partners and educational institutions to craft programs that equip new workers with the skills needed to thrive in an automation-driven world. These programs bear the potential to combat emerging inequality by propelling education and training initiatives that foster success for all.


When open source cloud development doesn't play nice

Remember that the cloud provider is merely “providing” the open source software. They are not typically supporting it beyond that. For more, you’ll need to look internally or in other places. Open source users, whether in the cloud or not, often have to rely on community resources, typically provided through forums or message boards, which takes time. This can impede cloud development progress in urgent, time-sensitive scenarios or complex issues. A developer told me once that she needed to attend a meeting of the open source community before she could have a resolution to a specific problem—a meeting that was five weeks out. That won’t work. From a security standpoint, open source software can pose specific challenges. Although a community of developers regularly reviews such software, it can still harbor undetected vulnerabilities, primarily because its code is openly accessible. For instance, some open source supply chain issues arose a few years ago. These vulnerabilities can become severe security threats without stringent security measures and frequent updates. 



Quote for the day:

"Sometimes it takes a good fall to really know where you stand." -- Hayley Williams

Daily Tech Digest - October 04, 2023

The Big Threat to AI: Looming Disruptions

As if semiconductor supply chain issues weren’t enough of a problem for AI production, other supply chains are piling on the challenges. "AI is software and open-source code makes up 90% of most codebases, which means the open source software supply chain has just as much, if not more, impact on AI production than regulated hardware components,” says Feross Aboukhadijeh, founder and CEO of Socket. The impact is potentially widespread given there are many open source AI models and tools on the market today and more are coming. ... There are numerous efforts afoot to relieve these concerns and secure a prime slice of the AI market pie. For what corporation does not envy Nvidia right now? “Many countries are trying to increase their piece of the global supply chain capacity and/or to onshore as much as possible through subsidies and other incentives. This has spurred significant investment and activity, but it remains to be seen whether these investments will address the supply chain problems in a timely or appropriate manner,” says Almassy.


When to Scale and When Not to Scale

Scaling is a nuanced decision in the agile journey, bridging the demands of complexity and rapid market needs. While the lure of scaling promises greater coordination, efficient handling of product intricacies, and swifter market responses, it's pivotal to approach it judiciously. It's not just about expanding teams or implementing frameworks; it's about recognizing when the product's complexity or market dynamics truly warrant a scaled approach. On the flip side, scaling without a clear strategy can introduce unforeseen challenges. From the inadvertent hiring of too many junior roles to the formation of functional silos, scaling can sometimes complicate rather than streamline. Additionally, foundational elements, such as a firm grasp of agile practices and automation, can determine the success of scaling endeavors. In essence, scaling is a tool in the agile toolkit—powerful when used correctly but potentially counterproductive if misapplied. Organizations must reflect on their unique scenarios, understanding both the promises and pitfalls of scaling, to ensure they chart a path that genuinely enhances agility, efficiency, and value delivery.


From Big Data to Better Data: Ensuring Data Quality with Verity

High-quality data is necessary for the success of every data-driven company. It enables everything from reliable business logic to insightful decision-making and robust machine learning modeling. It is now the norm for tech companies to have a well-developed data platform. This makes it easy for engineers to generate, transform, store, and analyze data at the petabyte scale. As such, we have reached a point where the quantity of data is no longer a boundary. Yet this has come at the cost of quality. ... Poor data quality in Hive caused tainted experimentation metrics, inaccurate machine learning features, and flawed executive dashboards. These incidents were hard to troubleshoot, as we had no unified approach to assessing data quality and no centralized repository for results. This delay increased the difficulty and cost of data backfills. The lack of centralization in data quality also made the data discovery process inefficient, making it hard for data scientists and data engineers to identify trustworthy data.
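The "unified approach to assessing data quality" the excerpt calls for typically means a suite of declarative checks whose results land in one place. A minimal sketch of that idea (the check names are illustrative, not Verity's API):

```python
def check_not_null(rows, column):
    """Flag rows where the column is missing."""
    bad = [i for i, r in enumerate(rows) if r.get(column) is None]
    return {"check": f"not_null({column})", "passed": not bad, "bad_rows": bad}

def check_unique(rows, column):
    """Flag rows whose column value duplicates an earlier row."""
    seen, dupes = set(), []
    for i, r in enumerate(rows):
        v = r.get(column)
        if v in seen:
            dupes.append(i)
        else:
            seen.add(v)
    return {"check": f"unique({column})", "passed": not dupes, "bad_rows": dupes}

def run_suite(rows, checks):
    """Run every check and collect results centrally, so quality issues
    surface here instead of as tainted metrics downstream."""
    return [check(rows) for check in checks]
```

Centralizing results this way addresses both problems named above: troubleshooting has a single place to look, and data discovery can rank tables by how recently their checks passed.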


AI vs software outsourcing: An opportunity or a threat?

As AI becomes more widespread, the question is whether programmers write code themselves or have chatbots write it. Customers usually expect quality. If AI can help deliver this quality faster, why not? Look, everyone knows that there is a programming language called Java. There are Apache Commons libraries. You can Google it, but can you do something with it? Can you bring value to the business? This is the point. LLM models are a tool, just like a library or a framework. However, this tool has other capabilities that need to be mastered and used to bring value. It will be a long time before AI can replace developers, because there will always be something that needs to be fixed. Either it's an error in the code or something wrong with the configuration. For example, a bot may have already written code that seems to work, but then an error appears. The developer spends little time writing the code but later spends more time hunting for the error. Let's take GitHub Copilot. Programmers note that the acceptance rate of suggestions from Copilot is up to 40%.


Why all IT talent should be irreplaceable

“Great employee” is easy to type. It’s less easy to define. Here’s a short list to get you started. Scrub it by discussing the question with your leadership team. The habit of success: Some employees seemingly don’t know how to fail. Give them an assignment and they’ll figure out a way to get it done. Competence: As a general rule, it’s better to apologize for an employee’s bad manners than for their inability to do the work. Without competence, employees with a strong success habit can do a lot of damage by, for example, creating kludges instead of sustainable solutions. Followership: Leadership is a prized attribute for employees to have. Prized, that is, if they’re leading in their leader’s direction. Otherwise, if you and they are leading in different directions, all your prized leaders will do is generate conflict and confusion. Followership is what happens when they embrace the direction you’re setting and make it their own. Intellectual honesty: Some employees can be persuaded with evidence and logic. Others trust their guts instead. That’s a physiological error. You want people who digest with their intestines but think with their brains.


Do you need both cloud architects and cloud engineers?

We need a collaborative approach with both disciplines. One cannot function properly without the other. For example, I cannot design multicloud-based systems that define different usages for different cloud services on different clouds. ... Many assume that the engineering tasks are the easiest part of the journey to the cloud. After all, if the cloud architect is good, the configuration should work, and it’s just a matter of using sound AI tools to carry out deployment. Even worse, some companies are working just with engineers and hiring specific skills. The company may pick a cloud brand and hire security, application, data, and AI engineers in that cloud platform. They assume that this specific cloud platform is the correct and optimized platform, which will usually cause trouble. Oh, the solutions may work, but it could cost 10 times more to operate. Not surprisingly, these companies have an underoptimized architecture since they’ve given zero consideration to architecture or the use of cloud architects. AI won’t save you from needing a good architecture and a good set of engineering disciplines. 


What IT needs to know about energy-efficiency directives for data centers

New regulations springing up in various regions will be among the drivers of data center sustainability in the months ahead. There are two main groups of regulations emerging that affect data center operations, according to Jay Dietrich, research director of sustainability at Uptime Institute. One is financial reporting modeled on the Task Force for Climate-related Financial Disclosures (TCFD), which requires reporting on energy consumption and efficiency and associated greenhouse gas (GHG) emissions. The other is the European Energy Efficiency Directive (EED), which requires an energy management plan, an energy audit, and reporting of operational data. In addition, there are voluntary, country-specific standards and siting requirements for data center efficiency and operations in various countries around the world, Dietrich says. A current example of a TCFD-related regulation is the E.U. Corporate Sustainability Reporting Directive (CSRD), with reporting requirements rolling out from large to small enterprises beginning in 2025 and continuing until 2028.


What does leadership in a hybrid world look like?

Firms want their best people to stick around and give more of themselves. Studies have shown that improved employee collaboration and alignment with a common purpose is key to achieving that. But what is the best way to make that happen in the way we now wish to work and live our lives? Some suggest that the emergence of generative AI and new work tools can improve productivity regardless of the workplace setting. But perhaps a different, more human, approach is needed? The profound loosening of relationships that employees have with their firm and one another requires a similarly fundamental reimagining of the role of the leader itself. Ultimately, this will not come through new technology, systems, processes, or HR policy (however well-crafted), but through the actions and behaviours of credible and engaging people managers. Firms need to re-establish a sense of cohesion, and that needs people who are exceptionally good at doing just that. Businesses can’t just issue ultimatums or mandates; they need a leadership approach that “coheres” employees to feel less remote from one another and the firm.


Six skills you need to become an AI prompt engineer

Prompt engineering is much more of a collaborative conversation than an exercise in programming. Although LLMs are certainly not sentient, they often communicate in a way that's similar to how you'd communicate with a co-worker or subordinate. When you're defining your problem statements and queries, you will often have to think outside the box. The picture you have in your head may not translate to the internal representation of the AI. You'll need to be able to think about a variety of conversational approaches and different gambits to get the results you want. ... While you might not necessarily be expected to write the full application code, you will provide far more value if you can write some code, test your prompts in the context of the apps you're building, run debug code, and overall be part of the interactive programming process. It will be much easier for a team to move forward if the prompt engineering occurs as an integral part of the process, rather than having to add it in and test it as a completely separate operation.
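The iterative, conversational workflow described above can be made programmatic. The sketch below assumes a stand-in `fake_llm` function in place of a real model call (in practice, an HTTP request to an LLM API); the scoring heuristic is likewise illustrative.

```python
# A minimal sketch of programmatic prompt iteration. `fake_llm` is a
# stand-in for a real model call, not any particular vendor's API.
def fake_llm(prompt: str) -> str:
    # Toy behavior: a more specific prompt yields more structured output.
    if "as a bulleted list" in prompt:
        return "- point one\n- point two"
    return "point one and point two"

def score(response: str) -> int:
    # A crude automatic check: count bullet lines in the response.
    return sum(1 for line in response.splitlines() if line.startswith("- "))

def best_prompt(variants: list[str]) -> str:
    # Try each conversational gambit and keep the one that scores highest.
    return max(variants, key=lambda p: score(fake_llm(p)))

variants = [
    "Summarize the report.",
    "Summarize the report as a bulleted list.",
]
winner = best_prompt(variants)
```

Running prompt variants through a harness like this is one way to make prompt engineering "an integral part of the process" rather than a separate manual step.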


The Cost Dynamics of Multitenancy

Isolating tenants with infrastructure has a higher initial cost, especially as you discover the right size for tenant workloads. Once you understand the cost for a tenant, it provides a very stable cost per tenant. Any unevenness in the cost profile represents a choice of timing. For example, if you use containers per tenant, you must decide when to commission your next cluster. Software-based multitenancy has an early advantage as it keeps the initial product price low. The marginal economics of onboarding a tenant are very low — almost zero. There comes a point when the initial design can no longer manage the load. The first port of call is vertical scaling — adding more power to the infrastructure to handle the load. This increases the cost per tenant but enables further tenants to be added. Eventually, you run out of vertical scaling options and look to horizontal scaling. This requires more investment as you need to handle load balancing, re-architect stateful interactions and introduce technologies such as shared cache.
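The two cost profiles described above can be sketched with a toy model. All numbers here are illustrative assumptions, not benchmarks: per-tenant isolation is linear and predictable, while the shared design starts cheap but takes stepwise jumps each time it hits a capacity limit and must scale.

```python
# A toy cost model contrasting the two multitenancy approaches.
def infra_isolated_cost(tenants: int, per_tenant: float = 100.0) -> float:
    # One isolated stack per tenant: stable, linear cost per tenant.
    return tenants * per_tenant

def shared_cost(tenants: int, base: float = 500.0,
                capacity: int = 50, scale_step: float = 2000.0) -> float:
    # Near-zero marginal cost per tenant, plus a stepwise jump each time
    # the shared design exhausts its capacity and must be scaled out.
    steps = tenants // capacity
    return base + steps * scale_step + tenants * 1.0

# Early on, the shared design wins on price; the isolated design wins
# on predictability as tenant counts grow.
```

Plotting both functions against tenant count makes the crossover point, and the timing choices around it, easy to discuss with finance.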



Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal

Daily Tech Digest - August 11, 2023

How to tell if your cloud finops program is working

A successful finops program should ensure compliance with applicable financial regulations and industry standards. These change across industries, but a few industries, such as finance and health, are more constrained by rules than others. A good finops program will help your company stay current with relevant laws, rules, and regulations, such as GAAP (generally accepted accounting principles) or IFRS (International Financial Reporting Standards). Regular audits and reviews should be conducted to ensure that financial processes and practices align with the required standards and laws. These are often overlooked by cloud engineers and cloud architects building and deploying cloud-based systems since most of them don’t have a clue about regulations and laws beyond the basics. If done well, finops should take the stress off those groups and automate much of what needs to be monitored regarding regulatory compliance. I was early money on finops, and for good reason. We need to understand the value of cloud computing right after deployment and monitor its value continuously. 


Why Data Science Teams Should Be Using Pair Programming

Based on what we learn about the data from EDA, we next try to summarize a pattern we’ve observed, which is useful in delivering value for the story at hand. In other words, we build or “train” a model that concisely and sufficiently represents a useful and valuable pattern observed in the data. Arguably, this part of the development cycle demands the most “science” from data scientists as we continuously design, analyze and redesign a series of scientific experiments. We iterate on a cycle of training and validating model prototypes and make a selection as to which one to publish or deploy for consumption. Pairing is essential to facilitating lean and productive experimentation in model training and validation. With so many options of model forms and algorithms available, balancing simplicity and sufficiency is necessary to shorten development cycles, increase feedback loops and mitigate overall risk in the product team. As a data scientist, I sometimes need to resist the urge to use a sophisticated, stuffy algorithm when a simpler model fits the bill.
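The train-validate-select cycle described above can be illustrated with a stdlib-only sketch: compare a deliberately simple baseline (predict the mean) against a slightly richer model (a least-squares line) and keep whichever validates better. The data and models are toy assumptions.

```python
# Compare a simple baseline against a richer model on held-out data.
from statistics import mean

def fit_mean(xs, ys):
    m = mean(ys)
    return lambda x: m          # simplest possible model

def fit_line(xs, ys):
    # Ordinary least-squares line through the training points.
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

def val_error(model, xs, ys):
    # Mean squared error on the validation set.
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [0, 1, 2, 3], [0.1, 1.0, 2.1, 2.9]   # roughly y = x
val_x, val_y = [4, 5], [4.2, 4.8]

candidates = {name: fit(train_x, train_y)
              for name, fit in [("mean", fit_mean), ("line", fit_line)]}
best = min(candidates, key=lambda n: val_error(candidates[n], val_x, val_y))
```

A pair can iterate on the `candidates` dictionary together, adding a model form only when a simpler one demonstrably fails validation, which keeps the experiment loop lean.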


Should IT Reinvent Technical Support for IoT?

A first step is to advocate for IoT technology purchasing standards and to gain the support of upper management. The goal should be for the company to not purchase any IoT technology that fails to meet the company’s security, reliability, and interoperability standards, which IT must define. None of this can happen, of course, unless upper management supports it, so educating upper management on the risks of non-compliant IoT, a job likely to fall to the CIO, is the first thing that should be done. Next, IT should create a “no exceptions” policy for IoT deployment that is rigorously followed by IT personnel. This policy will make it a corporate security requirement to set all IoT equipment to enterprise security standards before any IoT gets deployed. Finally, IT needs a way to stretch its support and service capabilities at the edge without hiring more support personnel, since budgets are tight. If something goes wrong at your manufacturing plant in Detroit while technical issues arise at your San Diego, Atlanta, and Singapore facilities, it will be a challenge to resolve all issues simultaneously with equal force.


Why AI Forces Data Management to Up Its Game

With so much storage growth, organizations never reach the point where storage is no longer a constant challenge. The combination of massive capacity growth and democratized AI make it imperative to implement effective data management from the edge to the cloud. A strong foundation for artificial intelligence necessitates well-organized data stores and workflows. Many current AI projects are faltering due to a lack of data availability and poor Data Management. Skilled Data Management, then, has become a key factor in truly realizing the potential of AI. But it also plays a vital role in containing storage costs, hardening data security and cyber resiliency, verifying legal compliance and enhancing customer experiences, decision-making, and even brand reputation. ... Using metadata and global namespaces, the Data Management layer makes data accessible, searchable, and retrievable on whatever storage platform or media it may reside. It adds automation to facilitate tiering of data to long-term storage as well as cleansing data and alerting on anomalous conditions.
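The automated tiering mentioned above can be sketched as a metadata-driven policy: route each object to a storage tier based on how long ago it was last accessed. The tier names and thresholds here are illustrative assumptions, not any product's actual policy.

```python
# A minimal sketch of metadata-driven data tiering.
from datetime import datetime, timedelta

def pick_tier(last_access: datetime, now: datetime) -> str:
    age = now - last_access
    if age < timedelta(days=30):
        return "hot"
    if age < timedelta(days=365):
        return "warm"
    return "archive"   # candidate for long-term, low-cost storage

now = datetime(2024, 1, 14)
catalog = {
    "q4-report.pdf": datetime(2024, 1, 2),
    "2022-logs.tar": datetime(2022, 3, 1),
}
placement = {name: pick_tier(ts, now) for name, ts in catalog.items()}
```

In a real data management layer, `catalog` would be populated from the metadata store behind the global namespace, and anomalous access patterns could trigger alerts from the same loop.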


Hybrid work is entering the 'trough of disillusionment'

Even though remote and hybrid work practices are in the trough now, that doesn’t mean they’ll stay there. Some early adopters eventually overcome the initial hurdles and begin to see the benefits of innovation and best practices emerge. Until then, the return-to-office edicts continue to roll out. ... Even with an uptick in return-to-office mandates, office building occupancy continues to remain below pre-pandemic levels. The average weekly occupancy rate for 10 metropolitan areas in the United States this week was below 50% (48.6%), according to data tracked by workplace data company Kastle Systems. That occupancy rate is actually down 0.6% from last week. Office occupancy rates change substantially, depending on the day of the week. Tuesdays, Wednesdays and Thursdays are the most popular in-office days. Globally and in the US, organizations have moved from ad hoc hybrid work policies, where employees could pick their days in the office, to structured schedules.


Cisco: Hybrid work needs to get better

While organisations in APAC have been progressive in adopting hybrid work arrangements, Patel cautioned them against making the mistake of mandating that employees work in the office all the time. “It’s much better to create a magnet than a mandate,” he said. “Give people a reason to come back to the office because when they collaborate in the office, there’s going to be this X factor that they don’t get when they are 100% remote.” Patel said adopting hybrid work would also help organisations recruit the best talent from anywhere in the world, enabling more people to participate equally in a global economy. “The opportunity is very unevenly distributed right now, but human potential is pretty evenly distributed, so it would be nice if anyone in a village in Bangladesh can have the same economic opportunity as someone in Silicon Valley. “Most of the time, the mindset is that you are distance-bound, so if you don’t happen to be in the same geography, then you don’t have access to opportunity. That’s a very archaic way of thinking and we need to think about this in a much more progressive manner,” he said.


Rethinking data analytics as a digital-first driver at Dow

The first step in this journey involved bringing our D&A teams under one roof in the first half of 2022. This team eventually became Enterprise D&A, with team members based around the world. To develop the strategy, we held discussions with external partners and interviewed Dow leaders to identify trends important to business success. Then we looked at where those trends align with key focus areas like customer engagement, accelerating innovation, market growth, reliability, sustainability, and the employee experience. Our central task was to translate our findings into a strategy that creates the most value for our stakeholders: our customers, our employees, our shareholders, and our communities. We determined we needed to move to a hub-and-spoke model. To make this work and achieve our vision of transforming data into a competitive advantage, we would need to build a strong culture of collaboration around D&A and support it with talent development within our organization and across the company.


Why data isn’t the answer to everything

What happens when you disagree with the AI? What are you then going to go and do? If you’re always going to disagree with it and do what you wanted to do anyway, then why bother bringing the AI in? Have you maybe mis-written your requirements and what that AI system is going to go and do for you? A lot of this is the foundational strategy on organisational design, people design, decision making. As an executive leader, it’s really easy to stand up on stage and say, ‘Here’s our 2050 vision or our 2030 vision.’ At the end of the day, an executive doesn’t do much, they just create the environment for things to happen. It’s frontline staff that make decisions. There are two reasons why you wouldn’t make a decision: you don’t have the right data and context or you don’t have the authority to make that decision. Typically, you only escalate a decision when you don’t have the data and context. It’s your manager that has more data and context, which enables that authority. So, with more data and context, I can push more authority and autonomy down to the frontline to actually go and drive transformation. 


Whirlpool malware rips open old Barracuda wounds

The vulnerability, according to a CISA alert, was used to plant malware payloads of Seapsy and Whirlpool backdoors on the compromised devices. While Seapsy is a known, persistent, and passive Barracuda offender masquerading as a legitimate Barracuda service "BarracudaMailService" that allows the threat actors to execute arbitrary commands on the ESG appliance, Whirlpool backdooring is a new offensive used by attackers who established a Transport Layer Security (TLS) reverse shell to the Command-and-Control (C2) server. "CISA obtained four malware samples -- including Seapsy and Whirlpool backdoors," the CISA alert said. "The device was compromised by threat actors exploiting the Barracuda ESG vulnerability." ... Whirlpool was identified as a 32-bit executable and linkable format (ELF) that takes two arguments (C2 IP and port number) from a module to establish a Transport Layer Security (TLS) reverse shell. A TLS reverse shell is a method used in cyberattacks to establish a secure communication channel between a compromised system and an attacker-controlled server.


How digital content security stays resilient amid evolving threats

AI technology advancements and the great opportunities they provide have also motivated business leaders and consumers to reassess the underlying trust models that have made the internet work for the past 40 years: every major advance in computing tech has stimulated sympathetic updates in the computer security industry, and this recent decisive move into a world powered by data, and auto-generated data, is no different. Provenance will become a key component in determining the trustworthiness of data. The changes, though, extend beyond technology. Rather than continuing to use systems that were built to assume trust and then verify, businesses and consumers will shift to verify-then-trust systems, which will also bring mutual accountability into all processes where data is shared. Standards, open APIs and open-source software have proven adaptable to changing technology previously and will continue to prove adaptable in the age of AI and significantly higher volumes of digital content.



Quote for the day:

"He who wishes to be obeyed must know how to command" -- Niccolò Machiavelli

Daily Tech Digest - August 02, 2023

Return-to-office mandates rise as worker productivity drops

In the first quarter of 2023, labor productivity dropped 2.1% in the US, even as the number of hours worked increased by 2.6%, according to the BLS. The highest levels of remote workers are in North America and Northern Europe, with lower levels in Southern Europe, and even fewer still in Asia — particularly in developing countries, according to a study by Stanford University’s Institute for Economic Policy Research (SIEPR) released in July. ... “Bosses want workers back in the office; workers want flexibility,” said Peter Miscovich, the managing director of Jones Lang LaSalle IP (JLL), a global real estate investment and management firm that tracks remote work trends. But current return-to-office mandates haven't always been effective, and they risk driving employees away, according to Miscovich. "Given current low-unemployment rates — particularly in technology fields — talent has the upper hand and will have the upper hand over the next 10 to 15 years,” Miscovich said. While some companies have drawn attention for heavy-handed tactics to get employees back to the office, others are succeeding at getting buy-in for structured hybrid work policies.


IT professionals: avoiding bad days at work

The most common cause of stress is work-related, with one recent study showing that 79% of UK professionals say they frequently feel stressed, and our own research revealed that over two-thirds of IT leaders (70%) reported that there is pressure to deliver security protection in a short amount of time. Whilst organisations must be able to identify the sources of stress to support their people, it must be noted that, due to the nature of working with technology, IT professionals will encounter stressful situations — whether the solution is to turn it off and on again or something much more serious. Having the right mix of people, processes and technology will assist in minimising these situations; however, when they do occur, it is vital that leaders are able to recognise these situations and support their people. This comes back to ensuring the most appropriate technology is in place, along with having clear plans and processes in place to best support the needs of the organisation, its people and its customers.


Why synthetic data is a must for AI in telecom

Synthetic data reflects real-world data both mathematically and statistically. But rather than being collected from and measured in the real world, it is created by computer simulations, algorithms, simple rules, statistical modeling and other techniques based on small, anonymized real-world samples. “While real data is almost always the best source of insights from data, real data is often expensive, imbalanced, unavailable or unusable due to privacy regulations,” Gartner VP analyst Alexander Linden said in a Q&A blog post. “Synthetic data can be an effective supplement or alternative to real data.” Artificial data can help mitigate weaknesses in real data, or can be used when no live data exists, when data is highly sensitive or otherwise biased, or can’t be used, shared or moved. It doesn’t always have to be trained on real data, however: it can be generated just from domain or institutional knowledge or traces of real data. With the massive explosion in the use of data-hungry generative AI models and the necessity of privacy and security, enterprises across industry segments are recognizing the potential in synthetic data.
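The statistical-modeling route can be illustrated in a few lines: fit a distribution to a small, anonymized real sample, then draw a larger synthetic sample that mirrors its shape without reusing any real record. The normal-distribution assumption and the sample values below are purely illustrative.

```python
# Fit a simple statistical model to a small real sample, then generate
# synthetic data from it (illustrative only; real pipelines use richer models).
import random
from statistics import mean, stdev

random.seed(7)  # reproducible for illustration

real_sample = [42.0, 47.5, 39.8, 45.1, 44.2, 41.9, 46.3, 43.0]
mu, sigma = mean(real_sample), stdev(real_sample)

# 1,000 synthetic records that track the real sample's statistics
# without containing any actual record.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]
```

Real-world generators model joint distributions across many columns (or use generative networks), but the principle is the same: the synthetic set preserves the statistics, not the records.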


DDoS Attacks and the Cyber Threatscape

Occasionally, DDoS attacks were carried out to extort ransom payments, colloquially known as Ransom DDoS (RDDoS) attacks. The RDDoS attack should not be mistaken for ransomware, which may be driven by similar motivations but employs different tactics, techniques, and procedures (TTPs). The operational method in ransomware requires ‘denial of data’ by a malicious script, whereas RDDoS involves denial of service, generally by a botnet. Running a ransomware operation requires access to internal systems, which is not the case in ransom DDoS attacks. In RDDoS, threat actors leverage the threat of denial of service to conduct extortion, which may include sending a private message by email demanding a ransom amount to prevent the organisation from being targeted by a DDoS attack. According to a threat intelligence report, throughout the 2020–2021 global RDDoS campaigns, attacks ranged from a few hours up to several weeks with attack rates of 200 Gbps and higher. The DDoS attack can also serve as a means of reconnaissance, allowing attackers to assess the target’s vulnerabilities and gauge the strength of its defenses.


MDM’s Role in Strengthening Data Governance Practices

Ensuring regulatory compliance and the trustworthiness of data is paramount. This is where a systematic process comes into play, and Gartner MDM is leading the way in providing a comprehensive solution. With the ability to configure data governance policies, capture metadata, and perform data lineage, Gartner MDM allows for a full understanding of data assets and their use. This translates into improved compliance, reduced risk, and enhanced data trustworthiness. By implementing a systematic process that includes Gartner MDM, organizations can confidently navigate the complex landscape of regulatory requirements, safeguard data integrity, and ultimately increase customer trust. ... Data Governance has become essential with the ever-increasing amount of data organizations generate. However, manually reviewing and managing such a large amount of data can be challenging and time-consuming. This is where automation techniques come into play. By automating data governance processes, organizations can streamline the process, reduce errors, and make better decisions resulting from the data. 


Delivering privacy in a world of pervasive digital surveillance: Tor Project’s Executive Director speaks out

Our stance is clear, we think that encryption is a right – which is why it is built into our technology. As more and more aspects of our lives are carried out digitally, whether it is conducting financial transactions, accessing health care services or staying in touch with friends and loved ones, our online activity should be governed by the same rights to privacy and anonymity as our analog experiences. As part of our work, the Tor Project is currently active in the debate around the need to safeguard E2EE. We are engaged in advocacy work on the issue and have supported other organizations in their efforts to raise awareness, especially as part of the Global Encryption Coalition. ... Earlier this year, we launched the Mullvad Browser, a free, privacy-preserving browser offering similar protections as Tor Browser without the Tor network. Mullvad Browser is another option for internet users who are looking for a privacy-focused browser that doesn’t need a bunch of extensions and plugins to enhance their privacy and reduce the factors that can accidentally de-anonymize themselves.


The Debate Around AI Ethics in Australia is Falling Far Behind

In 2016, the World Economic Forum looked at the top nine ethical issues in artificial intelligence. These issues have all been well-understood for a decade (or longer), which is what makes the lack of movement in addressing them so concerning. In many cases, the concerns the WEF highlighted, which were future-thinking at the time, are starting to become reality, yet the ethical concerns have yet to be actioned. ... The WEF noted the potential for AI bias back in its initial article, and this is one of the most talked-about and debated AI ethics issues. There are several examples of AI assessing people of color and gender differently. However, as UNESCO noted just last year, despite the decade of debate, biases of AI remain fundamental right down to the core. “Type ‘greatest leaders of all time’ in your favorite search engine, and you will probably see a list of the world’s prominent male personalities. How many women do you count? An image search for ‘school girl’ will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. ...”


Vigilance advised if using AI to make cyber decisions

Artificial intelligence (AI) and machine learning (ML) driven tools and technologies are on the rise to help organizations address these challenges by significantly improving their security posture efficiently and effectively. Tools using ML and AI are improving accuracy and speed of response. ... The vendor may have utilised AI in various product development stages. For instance, AI could have been employed to shape the requirements and design of the product, review its design or even generate source code. Additionally, AI might have been used to select relevant open-source code, develop test plans, write the user guide or create marketing content. In some cases, AI could be a functional product component. However, it’s important to note that sometimes an AI capability might really be machine learning (ML). Determining the legitimacy of AI claims can be challenging: the vendor’s transparency and supporting evidence are crucial. Weighing the vendor’s reputation, expertise and track record in AI development is vital for distinguishing authentic AI-powered products from “snake oil.”


3 GitOps Myths Busted

It is highly likely that as your organization embarks on its cloud native journey, there will come a point where scaling to multiclusters becomes necessary. For instance, developers may need to work on and test applications before making pull requests without having direct access to the production code, of course, for applications running in production on Kubernetes. Moreover, in certain scenarios, a team might manage multiple clusters and distribute workloads among them to ensure sufficient fault tolerance and availability. For example, when running a machine learning training workload, the team might increase the number of replicas or cluster replicas to meet specific demands. Additionally, different clusters may be deployed across various physical locations in cloud environments, whether on Amazon Web Services, Azure, GCP and others, requiring separate tools and processes to align with geographic mandates, legal restrictions, compliance requirements, and data access policies.


Simplifying IT strategy: How to avoid the annual planning panic

In developing your strategy, you have two responsibilities related to the finances of any proposed project: First, you must articulate the costs and benefits of the project; and second, you must contextualize those costs and benefits by comparing them to overall budget projections, which should include multi-year projections that align with the needs and norms of your finance organization. Not sure how to frame the numbers? Borrow revenue projections from FP&A, then layer in projected IT run-rate spend, IT project spend for each year in the forecast, and summarize total IT spend as a percentage of revenue. Hint: Be ready to explain any increase in this metric. ... What will you need from others for your plan to succeed? Dedicated resources from BUs and functions? Participation in steering committees? Incremental funding? The point is you can’t drive a transformation alone. Key to success will be clarifying roles and responsibilities and ensuring others have skin in the game. ... Once you’ve tried answering the questions, consult your deputies. Test and refine your hypothesis as a group. 
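The framing suggested above — borrowed revenue projections, layered IT spend, and a headline percentage — can be sketched in a few lines. All figures below are illustrative assumptions (in $M), not guidance.

```python
# Summarize total IT spend as a percentage of revenue per forecast year.
revenue = {2024: 500.0, 2025: 540.0, 2026: 590.0}   # borrowed from FP&A
run_rate = {2024: 18.0, 2025: 18.5, 2026: 19.0}      # projected IT run-rate
projects = {2024: 4.0, 2025: 6.5, 2026: 5.0}         # IT project spend

it_pct_of_revenue = {
    year: round(100 * (run_rate[year] + projects[year]) / revenue[year], 1)
    for year in revenue
}
# Be ready to explain any year-over-year increase in this metric.
```

Presenting the trend of this one percentage, rather than raw spend, keeps the conversation anchored to the finance organization's own norms.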



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - July 19, 2023

This is why personal encryption is vital to the future of business

We already recognize that humans are the weakest link in any security infrastructure. But what isn’t sufficiently recognized is that any action that puts those humans more at risk makes anyone they work for more vulnerable. A well-resourced attacker will simply identify who works at the company they're aiming for and then find ways to compromise some of those individuals using seemingly unrelated tricks. That compromised data will then feed into more sophisticated attacks against the actual target. So, what makes it easy to create those customized attacks in the first place? Information about those people, what they enjoy, who they know, where they go, and how they flow. That’s precisely the kind of data any weakening in end-to-end encryption for individuals makes easier to get. Because if you weaken personal data protection in one place, you might as well weaken it in every place. And once you do that, you’re presenting hackers and attackers with a totally tempting table of attack surface treats to chow down on. This is not clever, nor is it sensible.


Data protection and AI - accountability and governance

Part of risk remediation will include having policies and procedures in place that ensure operational staff have sufficient direction as to their roles and responsibilities. These should be readily available and supported by training. Risk management policies will need to be implemented, or existing policies updated, to address AI-specific considerations — for example, regarding obtaining and handling AI training and test data, procuring and assessing external software, allocating roles and responsibilities for validation and independent sign-off of AI system development, deployment and updates (which may also include a role for an ethics committee), as well as ensuring policies relevant to automated decision making address risks of bias, prejudice or lack of interpretability. ... The UK GDPR requires controllers to be transparent with individuals about how their personal data will be collected and processed within AI systems, including by telling them how and why such data will be processed and in explaining any decisions made with AI, how long any personal data will be retained and who it will be shared with.


E-Waste: Australia’s Hidden ESG Nightmare

For Australian enterprises, e-waste is an IT life-cycle challenge, as much as an environmental one. With an increasingly decentralized workforce, IT teams are struggling to keep up with patch maintenance as well as the provisioning and deployment of new devices in such a way that it doesn’t disrupt operations. Consequently, these organizations are prone to create unnecessary e-waste through their poor processes, which can incur several consequences for a business. ... It remains true that managing e-waste at scale can be a logistical challenge for organizations. The best solution would be for IT teams to work with their suppliers and partners to establish a cyclical logistics chain, where older equipment is automatically fed back to the vendor and added to their e-waste management programs using the same logistics that deliver new technology. With the right partners and suppliers, which can offer reliable data-wiping services, the IT team will be able to manage the challenges of e-waste management in Australia. Largely due to these risk factors, the costs of poorly managing e-waste are likely to accelerate rapidly in the months ahead.


The draft data privacy law surprises with its simplicity

For the most part, the draft Digital Personal Data Protection Bill was pretty much what we had been promised—simple, principles-based and generally appropriate for our current stage of maturity. Most businesses I spoke with confirmed that, if passed as is, they would have no problem complying with the obligations it imposed after a reasonably short transition period. To be clear, there were things we would have liked to see changed—clauses that needed to be tweaked and others I would have liked removed. I had an opportunity to engage in the consultations that followed and found the government not just willing to hear our points of view, but keen to understand what impact the text of the draft would have on implementation of the law. In a truly democratic process, it is impossible for everyone’s suggestions to be incorporated, especially when they come from different perspectives. I know that is probably the case for several of my suggestions, but I know that where there exists a multiplicity of views, it is only possible for one to be reflected. 
The question is how an enterprise can use its data to do more than just cool things. Enterprises are considering how their data can help shareholders. Kobielus wrote TDWI’s Best Practices Report with an eye to determining the chief factors that contribute to data monetization success. He found what he calls “four strategies for data monetization.” “The first one may not, at first glance, sound like a key strategy for monetization of data at all, but it is. It is data democratization -- giving everybody in your organization access to the best data you have to support data-driven analytics,” such as performing queries and producing reports. Enterprises can see the payoff of data democratization in terms of qualitative factors (such as employees working smarter), but there are quantitative factors as well, such as making better business decisions that enable the organization to boost sales, hold on to customers, or upsell to existing customers. “When we talk about data monetization, it's a maturity model, where you move from data democratization to operationalizing data.”


Managing Human Risk Requires More Than Awareness Training

The first step in managing human risk is to conduct a risk assessment to identify the risk factors most critical to the organization. Sound familiar? To be successful, a risk analyst must assess the likelihood of a vulnerability being exploited and the impact that would occur because of the event. To find these threat sources, the security operations team should be engaged to uncover documentation regarding cyberincidents, threat intelligence and mitigation plans from past audits. The security operations team also tests users on the likelihood of penetration, for example, through phishing simulation exercises. Once an assessor has this information, they can build a risk register to prioritize the highest risk factors. Any educator knows that it is not possible to teach someone everything that they need to know and expect them to retain all the information. ... For example, employees in an organization should be made aware of the risk associated with phishing attacks or identity theft efforts that engage employees through attack vectors such as emails, texts or phone calls.
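The likelihood-times-impact prioritization described above can be sketched directly. The 1–5 scales and multiplicative scoring are one common convention, not the only one, and the example risks are hypothetical.

```python
# Build a risk register from assessed likelihood and impact (1-5 scales),
# prioritized by their product.
risks = [
    {"threat": "phishing credential theft", "likelihood": 4, "impact": 4},
    {"threat": "lost unencrypted laptop",   "likelihood": 2, "impact": 5},
    {"threat": "tailgating into office",    "likelihood": 3, "impact": 2},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

register = sorted(risks, key=lambda r: r["score"], reverse=True)
# Highest-scoring risks get awareness and mitigation attention first.
```

The value of the register is less the arithmetic than the forced conversation: phishing simulation results feed the likelihood column, and incident history feeds the impact column.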


A quick intro to the MACH architecture strategy

At the very least, most software teams are likely putting one or more MACH elements to considerable use already. In that case, this evaluation will help reveal which of the four components your organization might be overlooking. For instance, if your organization is currently deploying microservices-based applications on individual servers, deploying those applications in containers across a cluster of servers would be one way to align more closely with a MACH strategy. Another plausible scenario is that a software team already uses microservices and cloud-native hosting, but isn't yet managing APIs in a way that positions them at the center of application design plans and build processes. Adopting an API-first development strategy -- that is, one that places a priority on determining how APIs will behave and address specific business requirements before any actual coding starts -- would place that team one step closer to proper MACH adoption. However, for teams that are truly starting at square one, such as those still running a localized monolith, it often makes the most sense to start out with headless application design. 
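The API-first idea above — agreeing on the contract before writing backend code — can be sketched in a few lines. The `ProductCatalogAPI` name, its method, and the response fields are hypothetical, chosen only to illustrate contract-before-implementation:

```python
# API-first sketch: the abstract class is the contract that frontend and
# backend teams agree on before any real implementation exists.
from abc import ABC, abstractmethod

class ProductCatalogAPI(ABC):
    """Contract defined up front; implementations come later."""

    @abstractmethod
    def get_product(self, product_id: str) -> dict:
        """Return {'id': ..., 'name': ..., 'price_cents': ...}."""

class InMemoryCatalog(ProductCatalogAPI):
    """A stand-in; a real MACH-style microservice would back this with a datastore."""

    def __init__(self, products: dict):
        self._products = products

    def get_product(self, product_id: str) -> dict:
        return self._products[product_id]

catalog = InMemoryCatalog(
    {"sku-1": {"id": "sku-1", "name": "Mug", "price_cents": 900}}
)
print(catalog.get_product("sku-1")["name"])
```

Because consumers code against the contract rather than the stand-in, the headless frontend and the eventual cloud-hosted microservice can be developed in parallel — which is the point of putting APIs at the center of the design.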


Is PC-as-a-Service part of your hybrid work strategy?

Getting new PCs into the hands of employees and making sure they’re regularly refreshed is complex. The old models of centralized staging and warehousing can create delays and excess shipping costs in today’s hybrid work styles. Moreover, IT teams struggle to find time to manage day-to-day PC lifecycle tasks. ... By taking this service-oriented approach to PC management, IT teams will spend less time managing and supporting devices, freeing up time to focus on projects that have a greater impact on the business. From a financial perspective, Dell APEX PCaaS flips the script on employee device purchasing, turning a fixed cost into a predictable monthly expenditure. Payments spread out over time -- like leasing a car or subscribing to cable services -- align with your experience of consuming cloud software while affording you flexibility in how you plan your budget and allocate staff. With Dell APEX PCaaS you can help your overworked IT staff deploy, support, and manage PCs, reducing time to value and total cost of ownership while ensuring that employees remain productive.
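The fixed-cost-versus-subscription framing is easy to make concrete. The numbers below are assumptions for illustration only, not Dell APEX PCaaS pricing:

```python
# Back-of-envelope comparison of an upfront capital purchase vs. a
# per-device monthly subscription over the same term. All figures assumed.
def upfront_cost(num_pcs: int, price_per_pc: float) -> float:
    """One large capital outlay at purchase time."""
    return num_pcs * price_per_pc

def subscription_cost(num_pcs: int, rate_per_pc_per_month: float,
                      months: int) -> float:
    """Predictable monthly spend over the contract term."""
    return num_pcs * rate_per_pc_per_month * months

capex = upfront_cost(100, 1200)            # 100 PCs bought outright
opex = subscription_cost(100, 38, 36)      # same fleet over a 3-year term
print(capex, opex)
```

The subscription may total more over the term; the trade, as the article frames it, is smoothing the spend and shifting lifecycle work (deployment, support, refresh) onto the provider.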


Why and how CISOs should work with lawyers to address regulatory burdens

As the regulatory burden increases, organizations and CISOs are having to take ownership of cyber risk, but it needs to be seen through the lens of business risk, according to Kayne McGladrey, field CISO with Hyperproof. Cyber risk is no longer simply a technology risk. "The problem is, organizationally, companies have separated those two and have their business risk register and their cyber risk register, but that’s not the way the world works anymore," says McGladrey. He believes the Securities and Exchange Commission (SEC), the Federal Trade Commission (FTC), and other regulators in the US are trying to promote collaboration among business leaders because cyber risks are functionally business risks. ... However, not all CISOs are naturally well versed in defining the business case of cyber risk, and McGladrey believes CISOs who are more adept at articulating the business value of cybersecurity will find it easier to achieve buy-in, while those with a more technical background who emphasize compliance over business risk may find it more difficult to get support and budget.


Stress Test: IT Leaders Strained by Talent Shortage, Tech Spend

George Jones, CISO at Critical Start, says a shortage of skilled professionals has led to delays in certain projects and increased workloads for existing team members. “To combat these delays, we have looked at upskilling current employees, brought in interns with specific skill sets, leveraged contract and freelance workers, and implemented knowledge-sharing to encourage cross-functional collaboration, empowering employees to learn from one another,” he says. He explains that Critical Start employees have clearly defined roles and responsibilities that align with their team and organizational goals, and that cross-functional collaboration is encouraged to leverage diverse perspectives and expertise. “Agile methodologies promote transparency, adaptability, and iterative progress and foster a culture of psychological safety where individuals feel comfortable sharing ideas, taking risks, and learning from failures,” he adds. To foster a culture of communication and collaboration, Jones says, his teams meet regularly to share knowledge and project updates, and to provide feedback on what is working and what isn’t.



Quote for the day:

"When your values are clear to you, making decisions becomes easier." -- Roy E. Disney