Daily Tech Digest - November 06, 2023

Business Continuity vs Disaster Recovery: A Guide to Key Differences

Business continuity is like an umbrella, covering every aspect of your business that could be impacted by disruptions – not just technology. Think of it as the master plan that keeps your entire operation functioning when faced with challenges. In contrast, IT disaster recovery is more specific; its focus lies in restoring systems, applications and data after an interruption occurs in tech infrastructure due to any number of causes – natural disasters, cyber-attacks or human error. The first major difference between these two concepts comes down to their scope. While business continuity covers all areas affected by potential disruptions, IT disaster recovery focuses on ensuring technological infrastructures remain functional following crises. Secondly, they have different end goals: while business continuity aims to maintain essential functions across the organization during a crisis until normalcy returns, IT disaster recovery’s objective is getting systems back up and running post-interruption. A third distinction lies in timeframes: a Business Continuity Plan often has longer-term solutions compared to the quicker response times expected from an effective Disaster Recovery Plan.


Unlocking the power of multi-cloud

In the era of digital transformation and widespread cloud migration, ensuring robust data security has become a paramount concern for enterprises. The introduction of regulations, such as the Digital Personal Data Protection Act 2023, extends the scope of compliance to smaller businesses, emphasizing the need for comprehensive data protection strategies. End-to-End Data Security Platforms: To address the evolving landscape of data security, businesses are advised to adopt full end-to-end data security platforms. These platforms serve a multifaceted role, helping organizations discover, protect, monitor, and respond to threats across on-premises and cloud environments. Structured and Unstructured Data Management: Platforms should enable the discovery and classification of both structured and unstructured data, providing a comprehensive view of data assets. This capability is crucial for effective data management and compliance efforts. Continuous Monitoring for Risk Mitigation: Implementing continuous monitoring practices is essential for reducing the risk of data breaches. This involves vigilant oversight of data access across on-premises and multiple cloud environments.
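Pattern-based discovery is typically the first pass such a platform automates. Below is a minimal, hypothetical sketch of that idea (the pattern names and regexes are illustrative stand-ins, not any vendor's ruleset): scan free-text records and tag the PII categories they appear to contain.

```python
import re

# Hypothetical, minimal PII classifier: pattern names and regexes are
# illustrative, not exhaustive (real platforms use ML plus many more rules).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_record(text: str) -> set[str]:
    """Return the set of PII categories detected in a free-text record."""
    return {name for name, pattern in PII_PATTERNS.items() if pattern.search(text)}

records = [
    "Contact: jane.doe@example.com",
    "SSN on file: 123-45-6789",
    "Meeting notes, nothing sensitive.",
]
for record in records:
    print(classify_record(record) or "no PII detected")
```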


Shadow IT use at Okta behind series of damaging breaches

Okta CISO David Bradbury said: “The unauthorised access to Okta’s customer support system leveraged a service account stored in the system itself. This service account was granted permissions to view and update customer support cases. “During our investigation into suspicious use of this account, Okta Security identified that an employee had signed into their personal Google profile on the Chrome browser of their Okta-managed laptop. “The username and password of the service account had been saved into the employee’s personal Google account. The most likely avenue for exposure of this credential is the compromise of the employee’s personal Google account or personal device,” he said. Bradbury added: “We offer our apologies to those affected customers, and more broadly to all our customers that trust Okta as their identity provider. We are deeply committed to providing up-to-date information to all our customers.” Okta said its investigation had been complicated by a failure to identify file downloads in customer support vendor logs. 


Getting Aggressive with Cloud Cybersecurity

The best way to get started is by evaluating vendors that offer proactive cloud security tools and determining their capabilities, Dalling advises. He also suggests reviewing the existing cloud-native inventory and security techniques. “Work with your organization’s security operations center to determine the most effective way to integrate a proactive cloud security tool into their monitoring and incident response workflows,” Dalling adds. By adopting a proactive cloud security approach, organizations can safeguard themselves against security threats, ensure compliance, and increase customer trust, says Ravi Raghava, vice president of cloud solutions at technology integrator SAIC via email. “This approach is often more cost effective than dealing with the aftermath of a security breach, which can result in substantial financial and reputational losses.” He notes that business partners are more likely to trust organizations that prioritize the protection of their data through proactive security steps.


Lessons From 100+ Ransomware Recoveries

Your data retention policy is how long you keep data for regulatory or compliance reasons, and how you remove it when it’s no longer needed. Ransomware attackers have evolved their methods. They know you are less likely to pay out if you can quickly switch over to Disaster Recovery systems. They are now delaying detonation of ransomware to outlast typical retention policies. This is the limitation of DR solutions. While they are the fastest way to recover, they have a limited number of versions or days you can recover to. For one of our manufacturing customers – using both our BaaS and DRaaS products – the ransomware was present on their systems for around three months. That meant that every DR recovery point was compromised, and we had to recover from backups. The Recovery Time Objective (RTO) was a day. We recovered from backups, so it took longer than DR but relatively speaking, it was a fast recovery. The Recovery Point Objective (RPO), however, was from three months prior. The challenge that the organisation then faced was how to re-create that lost data. 
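The arithmetic behind that failure mode is worth making concrete. A minimal sketch with invented dates and retention windows: if malware dwell time exceeds the DR solution's retention window, every DR point is compromised and recovery falls back to longer-retention backups, at the cost of a much older RPO.

```python
from datetime import date, timedelta

today = date(2023, 11, 6)                      # illustrative dates
infection_date = today - timedelta(days=90)    # ~3 months of dwell time

dr_retention_days = 30        # DR keeps a rolling 30 days of recovery points
backup_retention_days = 365   # backups are kept far longer

oldest_dr_point = today - timedelta(days=dr_retention_days)

if infection_date < oldest_dr_point:
    # Every DR recovery point postdates the infection, so all are compromised.
    clean_point = infection_date - timedelta(days=1)  # last clean backup
    rpo = (today - clean_point).days
    print(f"DR unusable; recover from backup, RPO ~ {rpo} days")
else:
    print("A clean DR recovery point exists within retention")
```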


Exploring the global shift towards AI-specific legislation

It is vital that the public – but moreover, all stakeholders – be involved in discussions around AI. The technology companies developing AI, for example, are likely the best placed to understand the technology fully and can help guide any such discussion. Those organizations deploying the technology must also be closely involved, as they have a particular viewpoint to offer. Governments also need to be a part of the discussion. The position of various nations can offer value and help steer the decision-making of all those governments represented in this context. Finally, let’s not forget the general public, the individuals whose data will likely be processed by the technology. All play valuable yet different roles and will come with different viewpoints that should be aired and considered. ... Legislation or any form of regulation is often seen as restrictive: by its very nature, it comprises a set of rules that govern. That is often interpreted as “restrictive” and hinders development, innovation, and technological advancement in this context. That is a generalist, simplistic, and somewhat dismissive view.


The 10 Biggest Cyber Security Trends In 2024 Everyone Must Be Ready For Now

With the work-from-home revolution continuing, the risks posed by workers connecting or sharing data over improperly secured devices will continue to be a threat. Often, these devices are designed for ease of use and convenience rather than secure operations, and home consumer IoT devices may be at risk due to weak security protocols and passwords. Industry has generally dragged its feet over implementing IoT security standards, despite vulnerabilities that have been apparent for many years, which means IoT will continue to be a cyber security weak spot – though this is changing. ... Two terms that are often used interchangeably are cyber security and cyber resilience. However, the distinction will become increasingly important during 2024 and beyond. While the focus of cyber security is on preventing attacks, the growing value placed on resilience by many organizations reflects the hard truth that even the best security can’t guarantee 100 percent protection. Resilience measures are designed to ensure continuity of operations even in the wake of a successful breach.


Andrew McAfee – ‘Human beings are chronically overconfident’

All of us, as human beings, are chronically overconfident. It’s the most common cognitive bias. That means that your brain children are going to be very, very dear to you, to the point that you’re probably unable to see the holes and the flaws. So that’s a problem. The solution is other people. This is how science works. This is why I describe one of the great geek norms simply as “science”. Science is really subjecting your ideas to the scrutiny of other people, and then having evidence-based discussions about the merits of those ideas. Is this good? Is this correct or not? One thing you can absolutely start doing is being a little less fond of your own ideas and stress testing those ideas early and often with other people. Another thing we can do is acknowledge other people’s good ideas. Just start saying, “That’s a really good idea, thanks. I hadn’t thought of that. Maybe we should take a different approach here.” Those kinds of statements are super powerful, especially when they are coming from leaders in an organisation, because as humans we are wired to take our cues from the people who have high status in an organisation.


IT leader’s survival guide: 8 tips to thrive in the years ahead

With so many disruptive technologies emerging at once, and IT leaders pulled in to solving so many more business challenges, it’s easy to get caught up in the fervor. But in addition to embracing change, IT leaders need to develop a multifaceted approach to navigating current technology and business challenges, says Sanjay Srivastava, chief digital strategist at Genpact. “IT leaders need to adapt by adopting a holistic approach that focuses on resilience, agility, diversification, and collaboration,” Srivastava says. “In this evolving IT investment landscape, the definition of risk has not changed, but the timeframe for response has shortened.” ... It can be difficult to adapt quickly as technology advances, while working to comply with varying regulations across state lines and borders. “The challenge is that the technology footprint — and our understanding of potentials and pitfalls — is still maturing, for instance with generative AI. It’s understandable and expected that regulations will evolve, and working through the changes coming in an otherwise long-term tech stack will be key to getting it right,” he says.


Empowered Agile Transformation – Beyond the Framework

The Executive team could be working 10 to 20 years out of date, because the expertise and experience that got them to their current position has lost its relevance in a world of accelerated change. Their approach can be to apply past experience to current problems. Their 20-year-old solutions are incompatible with contemporary problems. They need to retrain to adopt flexible systems that adapt to new challenges. Otherwise, the workers are constrained by the level of understanding of group executives, and progress is inhibited. They are impeding their teams’ potential. We have all the tools to work contemporaneously today. We have the technology, tools, and experience to leverage agility in delivering value. It is now the executive leaders and company boards resisting the new way of working: a more collaborative way to generate value for businesses and their customers. The solution is to understand their current customers’ problems, and identify threats to their business models, while gaining the skills and competencies to apply contemporary ways of working.



Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart

Daily Tech Digest - November 05, 2023

Less Code Alternatives to Low Code

Embracing a “minimalist coding” philosophy is foundational. It’s anchored in a gravitation toward clarity, prompting you to identify the indispensable elements in your code, and then discard the rest. Is there a more succinct solution? Can a tool achieve this outcome with less code? Am I building something unique and valuable or rehashing solved problems? Every line of code must be viewed for the potential value it delivers and the future burden it represents. Reduce that burden by avoiding or removing code when you can and leveraging the work of others. ... Modern frameworks offer a significant enhancement to development productivity, primarily by reducing the amount of code written to perform common tasks. Additionally, the underlying code of the framework is tested and maintained by the community, alleviating peripheral maintenance burdens. The same goes for code generators; they’re not merely about avoiding repetitive keystrokes, but about ensuring that the generated code itself is consistent and efficient. 
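As a small generic illustration of "leveraging the work of others" (the data here is invented), compare a hand-rolled parser, which becomes a future burden, with the battle-tested standard library:

```python
import csv
import io

raw = 'name,role\n"Doe, Jane",engineer\n'

# Hand-rolled: a naive split that silently breaks on quoted commas --
# code we now own, test, and debug forever.
rows_naive = [line.split(",") for line in raw.strip().splitlines()[1:]]

# Leveraging the work of others: the stdlib handles quoting, escaping,
# and edge cases that took years of community fixes.
rows = list(csv.DictReader(io.StringIO(raw)))

print(rows_naive)  # [['"Doe', ' Jane"', 'engineer']] -- wrong
print(rows)        # [{'name': 'Doe, Jane', 'role': 'engineer'}] -- right
```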


Software Deployment Security: Risks and Best Practices

Blue-Green Deployment is a release management strategy designed to reduce downtime and risk by running two identical production environments, known as Blue and Green. At any time, only one of these environments is live, with the live environment serving all production traffic. The primary security implication of the Blue-Green deployment is the risk of data inconsistency during the switchover. If not properly managed, sensitive data could be exposed, lost or corrupted. Furthermore, because two environments are maintained, security measures must be duplicated, potentially leading to inconsistencies and vulnerabilities if not properly managed. ... Canary deployment is a strategy where new software versions are gradually rolled out to a small subset of users before being deployed to the entire infrastructure. This strategy allows teams to test and monitor the performance of the new release in a live environment with less risk. Canary deployment exposes new software versions to a smaller user base first, potentially surfacing vulnerabilities before a full-scale release. If a vulnerability is exploited during this stage, it could lead to a security breach affecting a subset of users.
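At its core, a canary rollout is deterministic traffic splitting. The sketch below is a simplified, hypothetical router, not a production load balancer: it hashes each user ID so the same user always sees the same version while only a small fraction reaches the canary, which is also what limits the blast radius of any exploited vulnerability.

```python
import hashlib

CANARY_PERCENT = 5  # roll out to roughly 5% of users first

def route(user_id: str) -> str:
    """Deterministically route a user to 'canary' or 'stable'."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 100 // 256  # stable bucket in [0, 100)
    return "canary" if bucket < CANARY_PERCENT else "stable"

sample = [f"user-{i}" for i in range(1000)]
canary_share = sum(route(u) == "canary" for u in sample) / len(sample)
print(f"{canary_share:.1%} of users routed to canary")
```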


It’s time to take your genAI skills to the next level

The workforce of the future will learn AI in school, and during the next 15 years each successive generation of graduates will likely have much stronger AI kung fu than the last. In fact, my own son owns a Silicon Valley-based startup called Chatterbox, which exists to teach AI literacy to kids as young as eight years old. Learning AI at that age is unimaginable to adults currently in the workforce. Young workers entering the workforce will have a vastly superior knowledge of, and ability with, AI than the workforce that went to school before the LLM-based genAI revolution of 2022 and 2023. That’s why one of the smartest things you can do now, regardless of your specific occupation, is to get very serious about learning a lot more about genAI. “Prompt engineering” — the ability to use words to get output from genAI tools — is the skill of the year. But it’s only a matter of time before basic proficiency in prompt engineering becomes commonplace and banal. It’s important to set yourself apart from the crowd by going further and really studying how generative AI works, its limitations and potentialities, and the ethical and legal issues around its output.
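As a small illustration of what going further than basic prompting looks like, the sketch below contrasts a bare prompt with a structured one; the `complete` function is a hypothetical stand-in for whichever LLM client you actually use.

```python
def complete(prompt: str) -> str:
    """Hypothetical LLM call -- substitute your provider's client here."""
    raise NotImplementedError

naive_prompt = "Summarize this incident report."

engineered_prompt = """\
You are a security analyst writing for executives.
Summarize the incident report below in exactly 3 bullet points,
each under 20 words, ending with one recommended action.

Report:
{report}
"""

# The engineered version pins down audience, format, length, and task,
# which is what makes outputs predictable enough to build on.
print(engineered_prompt.format(report="..."))
```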


Why digital banking is a crucial financial literacy skill for kids

By starting early and providing guidance, parents and educators play a crucial role in helping children develop strong financial literacy skills. Digital banking not only enables children to understand the mechanics of money but also fosters a healthy relationship with finances. For instance, some innovative neobanks in India are currently providing prepaid cards that are exceptionally user-friendly and intuitive. These cards offer a unique opportunity for children to develop crucial financial literacy skills, such as prudent money management, efficient budgeting, and smart savings habits. ... The positive impact of early financial education, including digital banking literacy, on long-term financial well-being cannot be overstated. Introducing children to digital banking at a young age provides them with the knowledge and skills needed to make informed financial decisions throughout their lives. It not only equips them with the tools to navigate the cashless economy effectively but also fosters financial independence, responsibility, and resilience in the face of evolving financial challenges.


Mastering a multi-cloud environment

It is essential to understand the challenges that exist while creating a robust multi-cloud architecture. You need to incorporate the right set of tools and technologies to support workload placement across diverse platforms and services. A solid operating model to effectively manage multi-cloud use is imperative – breaking it down into process security, technology, financial operations and people and skills. One of the keys is aligning IT service management with your multi-cloud operating model – implementing the right technology to effectively operate, manage, monitor and secure resources and services among providers – from data management, governance and security to vendor licenses, contracts and more. ... In today’s fast-changing and threat-laden environment, a new approach to resilience is indispensable – one that helps ensure your ability to ‘bounce back’ quickly from disruptions and maintain application availability. New functional capabilities and skills to embed resilience through design are the way forward, and this will likely require businesses to give resilience greater priority as they invest in innovation.


Do we have enough GPUs to manifest AI’s potential?

The current production and availability of GPUs is insufficient to manifest AI’s ever-evolving potential. Many businesses face challenges in obtaining the necessary hardware for their operations, dampening their capacity for innovation. As manufacturers continue ramping up GPU unit production, many companies are already being hobbled by GPU accessibility. According to Fortune, OpenAI CEO Sam Altman privately acknowledged that GPU supply constraints were impacting the company’s business. ... Exploring alternative hardware to power AI applications presents a viable route for organizations striving for efficient processing. Depending on the specific AI workload requirements, CPUs, field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) may be excellent alternatives. FPGAs, which are known for their customizable nature, and ASICs, specifically designed for a particular use case, both have the potential to effectively handle AI tasks. However, it’s crucial to note that these alternatives might exhibit different performance characteristics and trade-offs.
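At the software level, coping with GPU scarcity often starts with explicit hardware fallback. A minimal PyTorch-flavored sketch (assuming the torch package is installed; FPGA and ASIC backends would surface as other device types or runtimes rather than this simple toggle):

```python
import torch

# Prefer a GPU when one is available, but degrade gracefully to CPU --
# the same model code runs either way, just with different throughput.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
batch = torch.randn(32, 128, device=device)
print(f"running on {device}: output shape {model(batch).shape}")
```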


The state of API security in 2023

Only 38% of organizations have solutions that enable them to understand the context between API activities, user behaviors, data streams, and code execution. In hyper-connected digital ecosystems, understanding this data is crucial. An anomaly in user behavior or a suspicious data flow might be early indicators of a breach attempt or a vulnerability exploitation. Moreover, the capability to tailor security responses based on dynamic threat parameters is indispensable. While generalized security protocols can thwart common threats, customized defenses based on threat actors, compromised tokens, IP abuse velocity, geolocations, IP ASNs, and specific attack patterns can be the difference between a repelled threat and a security breach. Yet most organizations do not have this capability. Lastly, companies continue to overlook the need to monitor and understand the communication patterns between API endpoints and application services. An API might be functioning as intended, but if its communication pattern is anomalous or its interactions with other services are unexpected, it could be an indicator of underlying vulnerabilities or misconfigurations.
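A tailored defense of this kind can start as simple multi-signal scoring. The sketch below is deliberately simplified, with illustrative thresholds and signals; real platforms correlate far more context across users, tokens, and endpoints.

```python
from dataclasses import dataclass

@dataclass
class ApiRequest:
    user_id: str
    ip_country: str
    requests_last_minute: int
    token_age_days: int

def threat_score(req: ApiRequest, home_country: str) -> int:
    """Combine independent signals; any single one may be benign."""
    score = 0
    if req.ip_country != home_country:
        score += 2          # unexpected geolocation
    if req.requests_last_minute > 100:
        score += 3          # abusive request velocity
    if req.token_age_days > 90:
        score += 1          # stale token, possible compromise
    return score

req = ApiRequest("u42", "RO", requests_last_minute=250, token_age_days=120)
if threat_score(req, home_country="US") >= 4:
    print("block and rotate token")  # a customized response, not a generic 403
```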


AI Safety? Rishi Sunak is all in for Elon Musk's work-free fantasy

“There will come a point where no job is needed,” Musk said. “You can have a job if you want to have a job, for personal satisfaction, but the AI will be able to do everything.” In this world, everyone would have what they wanted: “Not a universal basic income, we'll have universal high income.” Musk didn’t give any ideas on how that world will appear, either because he didn’t think he had to, or he didn’t want to face up to the idea that billionaires might have to learn to share. Instead, he cited the Culture science fiction novels of Iain M. Banks, which really just do the same thing better, placing people in a future quasi-Utopia, without giving any suggestion how society transferred from capitalism to a world of freely available high-tech. The real frustration of the AI Safety Summit, on display in the Musk-Sunak show, is that knowledge is power. Musk and the tech billionaires have both, while our elected representatives have neither [even if Sunak and his wife are personally close to billionaire status themselves].

Digital risk: Time to merge cyber security and data privacy

Taking an integrated business approach to managing digital risk delivers a number of key benefits to organisations. Firstly, it can help to bring forward digital transformation initiatives because the data classification and compliance that companies are undertaking across the business for various purposes is aligned and coordinated. Secondly, a digital risk function that conducts comprehensive assessments of third-party and supply chain digital risk is better positioned to ensure that risk is considered across the organisation. One way to do this is by pre-approving vendors from a risk perspective. “Businesses can digitally transform quicker if they do the supplier approval process up front,” says James Arthur, Partner, Head of Cyber Consulting, Grant Thornton UK. “It’s a lot easier to do this if you have a single digital risk function that proactively assesses cyber security and privacy risk together.” Thirdly, businesses continue to use new technologies to seek out commercial advantage, meaning their approach to data privacy and cyber security also needs to continually evolve, to address new threats and vulnerabilities. 


Where Does Cybersecurity Fit Into the Acquisition Process?

Acquiring and integrating an outside company also means inheriting a brand-new set of cybersecurity risks -- both direct and third-party. “If we make an acquisition, a lot of our customers will request to gain some understanding of the security of the company [we] acquired,” Huber explains. How will a company manage those newly acquired risks? Answering that question takes time and comes with a learning curve. Due diligence plays a big role in uncovering those risks, but the possibility that an unknown risk will emerge following the closing of a deal is almost certain. “I think that is always going to happen,” says Huber. “It’s not [a challenge] you can really plan for other than knowing that something’s going to happen.” Acquisitions can take months or quarters from deal consideration to closing. The first part of that process involves vetting the potential fit from business and technical perspectives. Once an acquisition appears to be a promising fit, the acquiring organization must go through its entire due diligence playbook to understand the opportunities and risks associated with its target.



Quote for the day:

"If it wasn't hard, everyone would do it. The hard is what makes it great." -- Tom Hanks

Daily Tech Digest - November 04, 2023

AI’s Role In Payments 3.0: Balancing Innovation With Responsibility

At the core of the responsible use of AI is protecting individuals and their data. After all, one of the most personal things to people is their financial information. It’s critical for businesses to work with data experts who know how to keep customers’ private information safe while appropriately using payment data to enable personalized service. ... One of AI’s greatest advantages is its ability to scan and analyze large amounts of data and suggest or implement improved experiences related to payments. One could argue that AI might be able to handle these tasks better than humans, not only because it can do so quickly, but because it eliminates the biases humans can impose. However, is that true in practice? Unfortunately, not always. Responsible AI depends upon first choosing the most accurate, audited and unbiased data sources available. Then it must use a system of audits during the development and implementation of the machine learning model—and frequently thereafter—to detect and correct for inappropriate biased decision-making. Another compelling reason to weed out AI bias is compliance.
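One common audit is the disparate impact check: compare approval rates across groups and flag ratios below a threshold (0.8 is the usual "four-fifths" rule of thumb). A minimal sketch with invented numbers:

```python
def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

# Illustrative model outputs (True = payment/credit approved) per group.
group_a = [True] * 80 + [False] * 20   # 80% approved
group_b = [True] * 55 + [False] * 45   # 55% approved

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("flag for review: retrain or reweight before deployment")
```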


Decoding Kafka: Why It’s Worth the Complexity

First off, learning Kafka requires time and dedication. Newcomers might take a few days or weeks to grasp the basics and months to master advanced features and concepts. In addition, you need to constantly monitor and learn from the cluster’s performance as well as keep up with Kafka’s evolution and new features being released. Setting up your Kafka deployment can be challenging, expensive and time-consuming. This process can take anywhere between a few days and a few weeks, depending on the scale and the specifics of the setup. You may even decide that a dedicated platform team will need to be created specifically to manage Kafka. ... Kafka is more than a simple message broker. It offers additional capabilities like stream processing, durability, flexible messaging semantics and better scalability and performance than traditional brokers. While its superior characteristics increase complexity, the trade-off seems justified. Otherwise, why would numerous companies worldwide use Kafka? 
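The basic produce/consume loop itself is small; the complexity lives in operating the cluster. A minimal sketch using the kafka-python client, assuming a broker at localhost:9092 and a topic named events:

```python
from kafka import KafkaProducer, KafkaConsumer

# Producer: publish a small event onto a topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", b'{"order_id": 1, "status": "created"}')
producer.flush()

# Consumer: read from the beginning as part of a named consumer group.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    group_id="demo-group",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.offset, message.value)
    break  # demo: stop after one record
```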


The Tech Gold Rush Is Over. The Search for the Next Gold Rush Is On

The tech industry will probably shrink, too. It will still be an important part of the economy, but employing fewer people and offering more normal returns. Now the question is where young people seeking wealth will turn next. From the Age of Discovery to the actual California Gold Rush to Snapchat, it is human nature to chase fortune where you can. It is a large part of what moves an economy forward. But these modern settings for this quest — the finance industry and the tech industry — are unusual in that they attracted many people who expected to get rich even if they lacked two things normally associated with extreme wealth creation: creating lots of value for the economy, and taking smart risks. The fact that anyone thinks things should be different is simply the result of the historically low interest rates of the last few decades, which helped make capital incredibly cheap. The price of capital is the price of risk, and if it is effectively zero, then it stands to reason that easy fortunes can be made risk-free.


To Improve Cyber Defenses, Practice for Disaster

The primary challenge organizations face when executing crisis simulations is determining the right level of difficulty, says Tanner Howell, director of solutions engineering at RangeForce. "With threat actors ranging from script kiddies to nation-states, it's vital to strike a balance of difficulty and relevance," he says. "If the simulation is too simple, it won't effectively test the playbooks. Too difficult, and team engagement may decrease." Walters says organizations should expand simulations beyond technical aspects to include regulatory compliance, public relations strategies, customer communications, and other critical areas. "These measures will help ensure that crisis simulations are comprehensive and better prepare the organization for a wide range of cybersecurity scenarios," he notes. Taavi Must, CEO of RangeForce, says organizations can implement some key best practices to improve team collaboration, readiness, and defensive posture. "Managers can perform business analysis to identify the most applicable threats to the organization," he says. "This allows teams to focus their already precious time around what matters most to them."


Going All-in With Evergreen Cloud Adoption Brings Its Own Challenges

An evergreen IT strategy on the other hand, keeps a cloud-based system online throughout to allow businesses to conduct smaller, more regular updates either weekly, biweekly, or monthly. These can be planned at optimum times to avoid downtime, support business continuity, and mitigate against lost profits. Disruption during transformation is always a risk, so incremental changes also mean pockets of disruption can be easier to isolate and resolve. An MSP expert has the time and knowledge necessary to craft a bespoke strategy for transformation, which builds-in operational resilience and offsets potential downtime. For instance, round-the-clock help desks mean businesses can access support and advice whenever they need it, to allow the fastest resolution of problems during the onboarding process. ... MSPs don’t just offer support during implementation, they have a wealth of systems knowledge that businesses can exploit to assist with automation updates, faults, and internal process reviews. For industries such as the manufacturing sector, downtime can account for 5% – 20% of working time, with lost productivity costing up to £180 billion a year. 


Ongoing supply chain disruption continues to take its toll

Firstly, there appears to have been little sign of easing on some supply chains, with 86 percent of respondents stating that they had experienced supply chain volatility over the past year, slightly down from the level reported in winter 2022 but similar to the level reported one year ago. Once again, the professionals responsible for delivering new data center facilities to the market have been significantly impacted by the volatility in the supply chain. Our survey of developer/investor respondents revealed that some 91 percent of them confirmed being significantly affected. This figure represents an increase from the 82 percent reported six months ago and 83 percent recorded a year ago. Notably, among our DEC stakeholders, the impact was more pronounced, with 93 percent expressing their strong agreement, compared to 70 percent in Q4 2022. Amongst our service providers, there is still a high level of agreement regarding this disruption albeit with some marginal easing; 92 percent stated that they had experienced such supply chain problems, down from 97 percent reported six months ago.


What Does a Data Scientist Do?

As the field of Data Science continues to evolve, so too does its potential to transform industries and revolutionize decision-making processes. While descriptive and predictive analytics have long been utilized to gain insights from historical data and make informed predictions about future outcomes, the current emphasis is on prescriptive analytics. Prescriptive analytics takes data analysis to a whole new level by not only providing insights into what might happen but also offering recommendations on how to make it happen. By leveraging advanced algorithms, machine learning techniques, and artificial intelligence, data scientists can now go beyond simply understanding patterns and trends. They can provide actionable suggestions that optimize decision-making processes. The impact of prescriptive analytics is far-reaching across numerous sectors. For example, in healthcare, it can help physicians determine personalized treatment plans based on patient data. In supply chain management, it can optimize inventory levels and streamline logistics operations. In finance, it can assist with risk management strategies.
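The step from predictive to prescriptive is the step from a forecast to a recommended action. A toy inventory example (all numbers invented): given a demand forecast, search for the stock level that minimizes expected cost, rather than merely reporting the forecast.

```python
# Predictive output: a probability distribution over tomorrow's demand.
demand_forecast = {80: 0.2, 100: 0.5, 120: 0.3}   # units -> probability

HOLDING_COST = 2    # per unsold unit
STOCKOUT_COST = 10  # per unit of unmet demand

def expected_cost(stock: int) -> float:
    cost = 0.0
    for demand, prob in demand_forecast.items():
        overstock = max(stock - demand, 0)
        shortfall = max(demand - stock, 0)
        cost += prob * (overstock * HOLDING_COST + shortfall * STOCKOUT_COST)
    return cost

# Prescriptive output: the recommended action itself, not just the prediction.
best = min(range(60, 141), key=expected_cost)
print(f"stock {best} units (expected cost {expected_cost(best):.2f})")
```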


From automated to autonomous, will the real robots please stand up?

Today's robots, despite not being as versatile as Mr. Data, are generally quite useful and functional. These include industrial robots, medical robots, military and defense robots, domestic robots, entertainment robots, space exploration and maintenance robots, agricultural robots, retail robots, underwater robots, and telepresence robots that help people participate in an activity from a distance. My personal interest has been focused on robots available and accessible to makers and hobbyists, robots that can empower individuals to build, design, and prototype projects previously only feasible by those with a shop full of fabrication machinery. I'm talking about 3D printers, which build up objects from layers of molten plastic; CNC devices, which often cut, carve, and remove wood or metal to create objects; laser cutters, which are ideal for sign cutting, engraving, and fabricating very detailed parts and circuit boards; and even vinyl cutters, for carefully cutting light, flexible material in intricate patterns. These machines are programmed using CAD software to define -- aka, design -- the object being built.


The Misleading Use of the ‘Technical Debt’ expression

Good software is software that makes users happy and has the capacity to be improved when necessary. However, some tech debts emerge from this need to evolve the product. I believe that when software does a good job of solving some problem, it probably needs to scale — as more people use it, more features become necessary, and so on. When that happens, some characteristics of your software will need to change, and something that usually works just fine will become a debt — if you don’t take care of it soon enough, your great software will be called a legacy one. Software and its architecture should be able to evolve. It’s important to map tech opportunities, because they can become technical debt in future, but we don’t need to act on all of them immediately if it doesn’t fit the current scenario. I’ve seen many managers saying ‘From now on let’s never do a feature with debts’ — to me, that shows a deep incomprehension of the reality of software development. You will need to incur debt to meet delivery timelines and succeed in software and in your business, and debts will always emerge as the scenario around your product changes.


What is a digital twin? And why is everyone suddenly building one?

Digital twins of vehicles, factories, and cities already exist, but could there ever be a digital twin of you? It’s very possible, especially as health technology, biosensors, and AI’s predictive ability improves. It’s easy to envision a day when every person has a digital health twin that mirrors our physical and genetic makeup and lifestyle, a doppelgänger we can feed hypothetical inputs to and see what effects they may have on our bodies. For example, a digital health twin could show us how adhering to a vegetarian diet would impact our specific body over the next 20 years. Or our digital health twin might be able to reveal the effect that physical stresses on our body will have as we continue to age—what another 10 years of sitting at our desk at work will do to our spine, for example. A digital twin may even be able to show us which treatment option for a disease is better for us by revealing how our unique body would react if we chose to undertake one treatment plan instead of another, say medication versus surgery. All this sounds far-fetched, but the digital twinning of organs and entire bodies is already well underway in the healthcare industry.



Quote for the day:

“Failure defeats losers, failure inspires winners.” -- Robert T. Kiyosaki

Daily Tech Digest - November 03, 2023

Embracing the co-evolution: AI's role in enriching the workforce

AI's capabilities in data processing and predictive analytics are undeniably impressive, yet it falls short in embodying human experience–empathy, contextual comprehension, and emotional intelligence. This raises the question: what can AI achieve without human involvement? For example, several automotive companies are adopting LLMs into their vehicles and systems. They use it to conduct routine checks and assist with on-road safety and predictive maintenance. But in this case, AI cannot fix any of the problems that it detects. To ensure that the challenges detected by AI are addressed, businesses will always need skilled human workers. ... If things with AI aren’t that bad, why is the popular narrative suggesting otherwise? The simple answer is, timing. The economic conditions coupled with the aftermath of the pandemic has left people bracing themselves for the next big disruption. Add the popularity of LLMs into the mix, and you have what seems like the next catastrophe. But that’s far from the truth.


Burnout: An IT epidemic in the making

Even among those who report low or moderate levels of burnout, 25% express a desire to leave their company in the near future. And burnout is also impacting skills acquisition, as 43% of Yerbo survey respondents said they had to stop studying for a certification exam because they were unable to find time due to their workloads. Further, burned-out employees who do leave are highly likely to negatively impact your company’s reputation by sharing their frustrations online and on review sites, where other potential candidates can see them. With tech talent markets always tight, increased burnout within your organization can quickly become not only a retention issue, but a recruitment problem as well. ... Burnout can’t be fixed overnight. Turning around burnout in your organization will require consistency and dedication to improving the employee experience. You’ll need to consider increases in resources, mentoring, opportunities for advancement, as well as evaluating boundaries around work-life balance and ensuring that a healthy balance is reflected and modeled all the way to the top.


Edge and beyond: How to meet the increasing demand for memory

What is needed is a way to improve direct access to offboard memory by providing on-demand access to memory across servers. The industry has recognized this and has been working on a software-defined memory solution for many years in the form of CXL. However, CXL 3.0, which provides complete caching capability, is still several years away, will require new server architecture, and will only be available in forthcoming generations of hardware. Concerns about latency compromises are surfacing, too. Even CXL 3.0 is still piggybacking on the PCI Express (PCIe) physical layer and relying on physical memory paired with PCIe, so one would ordinarily incur a penalty on a key critical metric—latency. Generally, the farther the memory is from the CPU, the higher the latency and the poorer the performance. Workloads at the heart of everything from HPC to AI have significant memory requirements. But designers struggle to make use of the additional cores available in modern CPUs. The leap forward in the number of CPU cores is mismatched with a lack of memory bandwidth.


The False Dichotomy of Monolith vs. Microservices

Sure, microservices are more difficult to work with than a monolith -- I’ll give you that. But that argument doesn’t pan out once you’ve seen a microservices architecture with good automation. Some of the most seamless and easy-to-work-with systems I have ever used were microservices with good automation. On the other hand, one of the most difficult projects I have worked on was a large old monolith with little to no automation. We can’t assume we will have a good time just because we choose monolith over microservices. Is the fear of microservices a backlash to the hype? Yes, microservices have been overhyped. No, microservices are not a silver bullet. Like all potential solutions, they can’t be applied to every situation. When you apply any architecture to the wrong problem (or worse, were forced to apply the wrong architecture by management), then I can understand why you might passionately hate that architecture. Is some of the fear from earlier days when microservices were genuinely much more difficult? 


The Software Testing Odyssey That You Need to Take

Let’s illustrate a practical scenario where a financial services company is adding new transactional functionalities to its application. Its team uses AI-powered test creation to transform its user stories and requirements into functional test scripts. The AI uses natural language processing to analyze descriptions of test requirements and convert them into executable scripts that simulate user interactions within the banking application. During testing, which is automated and runs at predefined times, a minor application layout UI change occurs. This results in a number of tests failing as the pre-existing automated tests cannot locate the updated element. This is where AI-powered self-healing comes in. The AI algorithm, powered by classification AI techniques, will inspect the failed tests meticulously and compare them with previous test versions. Through this analysis, the AI identifies the UI element change that caused the failures and autonomously updates the test scripts with new locators for the UI element changes.
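Stripped of the AI, the self-healing pattern is "try the known locator, fall back to candidates, and record what healed". A minimal Selenium-flavored sketch (assuming the selenium package and an already-created driver; real tools rank candidate locators with ML rather than using a fixed list):

```python
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

# Candidate locators for the same logical element, best-known first.
SUBMIT_LOCATORS = [
    (By.ID, "submit-btn"),
    (By.CSS_SELECTOR, "button[type='submit']"),
    (By.XPATH, "//button[contains(text(), 'Submit')]"),
]

def find_with_healing(driver, locators):
    """Try each locator in turn; report which fallback 'healed' the test."""
    for index, (by, value) in enumerate(locators):
        try:
            element = driver.find_element(by, value)
            if index > 0:
                print(f"healed: primary locator stale, used {by}={value}")
            return element
        except NoSuchElementException:
            continue
    raise NoSuchElementException("all candidate locators failed")
```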


3 Ways of Protecting Your Public Cloud Against DDoS Attacks

While basic DDoS protection offered by CSPs is free, more advanced, or comprehensive protection options come with additional costs. This becomes quite expensive because you will need to pay a monthly fee for each account or resource, and if you need more visibility into the traffic, you must turn on and pay for an additional service. All the additional charges add up quick and turn out to be quite expensive. Best for: All in all, the native DDoS protection offered by cloud service providers offers basic protection which provides good coverage for most network-layer attacks. This will be good for those looking for cheap, no hassle, integrated protection with low latency. ... Third-party DDoS mitigation services are best for organizations looking for dedicated, advanced DDoS protection, particularly of missions-critical applications. It is also suitable for organizations which are frequently attacked, and need constant, high-grade protection. In summary, DDoS protection is a fundamental component of cybersecurity in public cloud environments. 


What is data security posture management?

As defined by Gartner, “data security posture management (DSPM) provides visibility as to where sensitive data is, who has access to that data, how it has been used and what the security posture of the data stored or application is.” The DSPM approach aims to help organizations in three ways to improve their security posture: cloud data visibility, cloud data movement and cloud data protection. Cloud data visibility: Discover shadow data rapidly expanding in the cloud with autonomous data discovery. This capability provides a powerful and frictionless way to find data that sprawls within cloud service providers and Software-as-a-Service (SaaS) apps. Understanding where your data resides helps to shrink your attack surface and reduce data risks. Cloud data movement: Analyze potential and actual data flows across the cloud. Identifying where and how data moves will help provide clarity on which data access controls and policies can best prevent vulnerabilities and misconfigurations. Cloud data protection: Uncover vulnerabilities in data and compliance controls and posture. 


Harnessing Conflict To Create An Ideal Company Culture

The ideal work environment encourages open communication and provides psychological safety for team members to share their views and opinions in a respectful way. Cultivating this type of workplace takes time, practice and training. Effective communication is a skill that not all employees are taught, especially when it comes to expressing dissent or differing points of view. Occasional training, coaching sessions and/or other materials may be necessary to teach team members how to communicate respectfully. Courses can walk through theoretical conversations and provide practical tips on how to thoughtfully explain one’s point of view without offense or personally attacking those who see things differently. Coaching sessions could also be a valuable resource so that teammates can have a person available to help them evaluate real-life scenarios that they may encounter. Often business coaching can include role-play in those scenarios that allow people to practice their new skills. Successful leaders acknowledge and appreciate a diversity of voices—even the dissenters—in their company culture.


The House of Data and Data Stewardship with Dr. James Barker

“The House of Data is loosely based on Toyota’s House of Quality, which was a hot topic when I got my master’s degree,” Barker began. “When I was at Honeywell getting their first data governance council going, we had a diagram that included things such as master data management (MDM), data quality, standards, and enforcement as part of it, but it really wasn’t resonating with people. Then I saw an example of a pillar diagram at a conference and took it back to my team to apply to our work.” The original House of Data diagram had four pillars -- data quality, data security, MDM, and compliance -- with data architecture as the floor and the governance council itself as the roof. ... At the primary level, you have your lead data stewards working together to keep things moving forward, whether aligned around a specific function (such as finance or manufacturing) or around a line of business. This type of council works best at large organizations, includes a mix of LOB and functional representation, and often meets weekly to stay up to date on what’s working and what’s not. 


Why IT and Cybersecurity Need Apprenticeships Now More Than Ever

From the apprentice’s perspective, this pathway promises numerous benefits: acquisition of in-demand skills, paid learning opportunities, valuable field-specific experience, and networking avenues with potential employers. Often, apprenticeships culminate in full-time job offers, presenting a clear trajectory for career advancement. Businesses, in parallel, stand to gain significantly. Through apprenticeships, they can nurture a workforce tailored to their unique needs, potentially reducing turnover, diversifying their teams, and boosting overall morale and productivity. However, hiring apprentices is not the slam-dunk some government agencies make it out to be. Although companies can be reimbursed for the training costs for registering an apprentice program, participation has drawbacks. The application process is time-consuming, and most states require an Apprenticeship Governance Board to approve or reject an application. While this process is admirable to retain rigor in programs, it can be streamlined. After successful registration, there are compliance steps, related training and instruction, and mentor assignments. 



Quote for the day:

“Identify your problems but give your power and energy to solutions.” -- Tony Robbins

Daily Tech Digest - November 02, 2023

How Banks Can Turn Risk Into Reward Through Data Governance

To understand why data governance is critical for banks, we must understand the underlying challenges facing financial services organizations as they modernize. Rolling out new cloud applications or Internet of Things (IoT) devices into an environment where legacy on-premises systems are already in place means more data silos and data sets to manage. Often, this results in data volumes, variety, and velocity increasing much too quickly for banks. This gives rise to IT complexity—driven by technical debt or the reliance on systems cobbled together and one-off connections. Not only that, it also raises the specter of 'shadow IT' as employees look for workarounds to friction in executing tasks. This can create difficulties for banks trying to identify and manage their data assets in a consistent, enterprise-wide way that is aligned with business strategy. Ultimately, barely controlled data leads to errant financial reporting, data privacy breaches, and non-compliance with consumer data regulations. Failing to counter these risks can lead to fines, hurt brand image, and trigger lost sales. 


Key Considerations for Developing Organizational Generative AI Policies

It's crucial to ensure that all relevant stakeholders have a voice in the process, both to make the policy comprehensive and actionable and to ensure adherence to legal and ethical standards. The breadth and depth of stakeholders involved will depend on the organizational context, such as regulatory/legal requirements, the scope of AI usage and the potential risks associated (e.g., ethics, bias, misinformation). Stakeholders offer technical expertise, ensure ethical alignment, provide legal compliance checks, offer practical operational feedback, collaboratively assess risks, and jointly define and enforce guiding principles for AI use within the organization. Key stakeholders—ranging from executive leadership, legal teams and technical experts to communication teams, risk management/compliance and business group representatives—play crucial roles in shaping, refining and implementing the policy. Their contributions ensure legal compliance, technical feasibility and alignment with business and societal values.


CIOs sharpen cloud cost strategies — just as gen AI spikes loom

One key skill CIOs are honing to lower costs is their ability to negotiate with cloud providers, said one CIO who declined to be named. “People better understand the charges, and [they] better negotiate costs. After being in cloud and leveraging it better, we are able to manage compute and storage better ourselves,” said the CIO, who notes that vendors are not cutting costs on licenses or capacity but are offering more guidance and tools. “After some time, people have understood the storage needs better based on usage and preventing data extract fees.” Thomas Phelps, CIO and SVP of corporate strategy at Laserfiche, says cloud contracts typically include several “gotchas” that IT leaders and procurement chiefs should be aware of, and he stresses the importance of studying terms of use before signing. ... CIOs may also fall into the trap of misunderstanding product mixes and the downside of auto-renewals, he adds. “I often ask vendors to walk me through their product quote and explain what each product SKU or line item is, such as the cost for an application with the microservices and containerization,” Phelps says. 


Misdirection for a Price: Malicious Link-Shortening Services

Security researchers gave the service the codename "Prolific Puma." They discovered it by identifying patterns in links being used by some scammers and phishers that appeared to trace to a common source. The service appears to have been active since at least 2020 and is regularly used to route victims to malicious domains, sometimes first via other link-shortening service URLs. "Prolific Puma is not the only illicit link shortening service that we have discovered, but it is the largest and the most dynamic," said Renee Burton, senior director of threat intelligence for Infoblox, in a new report on the cybercrime service. "We have not found any legitimate content served through their shortener." Infoblox, a Santa Clara, California-based IT automation and security company, published a list of 60 URLs it has tied to Prolific Puma's attacks. The URLs employ such domains as hygmi.com, yyds.is, 0cq.us, 4cu.us and regz.information. Infoblox said many domains registered by the group are parked for several weeks before being used, since many reputation-based security defenses will treat freshly registered domains as more likely to be malicious.


DNS security poses problems for enterprise IT

EMA asked research participants to identify the DNS security challenges that cause them the most pain. The top response (28% of all respondents) is DNS hijacking. Also known as DNS redirection, this process involves intercepting DNS queries from client devices so that connection attempts go to the wrong IP address. Hackers often achieve this by infecting clients with malware so that queries go to a rogue DNS server, or by hacking a legitimate DNS server and hijacking queries at a more massive scale. The latter method can have a large blast radius, making it critical for enterprises to protect DNS infrastructure from hackers. The second most concerning DNS security issue is DNS tunneling and exfiltration (20%). Hackers typically exploit this issue once they have already penetrated a network. DNS tunneling is used to evade detection while extracting data from a compromised network. Hackers hide extracted data in outgoing DNS queries. Thus, it’s important for security monitoring tools to closely watch DNS traffic for anomalies, like abnormally large packet sizes. The third most pressing security concern is a DNS amplification attack (20%).
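A first-pass tunneling detector looks for exactly the anomaly described: query names that are unusually long or information-dense. A simplified sketch with illustrative thresholds; production monitoring would also track query volume per domain and packet sizes over time.

```python
import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy in bits per character."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunneling(qname: str) -> bool:
    label = qname.split(".")[0]   # leftmost label typically carries the payload
    return len(label) > 40 or entropy(label) > 4.0

queries = [
    "www.example.com",
    "4a6f686e2d446f652d3132333435363738393031323334.evil.example",
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_tunneling(q) else "ok")
```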


Data governance that works

Once we've found our targeted business initiatives and the data is ready to meet the needs of those initiatives, there are three major governance pillars we want to address for that data: understand, curate, and protect. First, we want to understand the data. That means having a catalog of data that we can analyze and explain. We need to be able to profile the data, to look for anomalies, to understand the lineage of that data, and so on. We also want to curate the data, or make it ready for our particular initiatives. We want to be able to manage the quality of the data, integrate it from a variety of sources across domains, and so on. And we want to protect the data, making sure we comply with regulations and manage the life cycle of the data as it ages. More importantly, we need to enable the right people to get to the right data when they need it. AWS has tools, including Amazon DataZone and AWS Glue, to help companies do all of this. It's really tempting to attack these issues one by one and to support each individually. But in each pillar, there are so many possible actions that we can take. This is why it's better to work backwards from business initiatives.


EU digital ID reforms should be ‘actively resisted’, say experts

The group’s concerns over the amendments largely centre on Article 45 of the reformed eIDAS, where it says the text “radically expands the ability of governments to surveil both their own citizens and residents across the EU by providing them with the technical means to intercept encrypted web traffic, as well as undermining the existing oversight mechanisms relied on by European citizens”. “This clause came as a surprise because it wasn’t about governing identities and legally binding contracts, it was about web browsers, and that was what triggered our concern,” explained Murdoch. ... All websites today are authenticated by root certificates controlled by certificate authorities, which assure the user that the cryptographic keys used to authenticate the website content belong to the website. The certificate owner can intercept a user’s web traffic by replacing these cryptographic keys with ones they control, even if the website has chosen to use a different certificate authority with a different certificate. There are multiple cases of this mechanism having been abused in reality, and legislation to govern certificate authorities does exist and, by and large, has worked well.
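The trust mechanism at stake here is visible from a few lines of code. A small sketch using Python's standard library (assuming network access to example.com on port 443): it connects to a site and prints which certificate authority vouched for it, the validation step where an Article 45-style government root certificate would take effect.

```python
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()  # trusts the platform's CA root store

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        # 'issuer' is a tuple of RDNs; flatten the (key, value) pairs.
        issuer = dict(rdn[0] for rdn in cert["issuer"])
        print("issued to :", hostname)
        print("issued by :", issuer.get("organizationName"))
```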


The key to success is to think beyond the obvious, to innovate and look for solutions

AI systems, including machine learning models, make critical decisions and recommendations. Ensuring the accuracy and reliability of these AI models is paramount. AI heavily relies on data and ensuring data quality, integrity, and consistency is a crucial task. Data pre-processing and validation are necessary steps to make AI models work effectively. Integration of software testing in the software development life cycle helps identify and rectify issues that could lead to incorrect predictions or decisions, minimizing the risks associated with AI tools. AI models are susceptible to adversarial attacks and robust security testing helps identify vulnerabilities and weaknesses in AI systems, protecting them from cyber threats and ensuring the safety of automated processes. Testing is not a one-time effort; it’s an ongoing process. Regular testing and monitoring are necessary to identify issues that may arise as AI models and automated systems evolve. High-quality, well-tested AI-driven automation can provide a competitive advantage.


We built a ‘brain’ from tiny silver wires.

We are working on a completely new approach to “machine intelligence”. Instead of using artificial neural network software, we have developed a physical neural network in hardware that operates much more efficiently. ... Using nanotechnology, we made networks of silver nanowires about one thousandth the width of a human hair. These nanowires naturally form a random network, much like the pile of sticks in a game of pick-up sticks. The nanowires’ network structure looks a lot like the network of neurons in our brains. Our research is part of a field called neuromorphic computing, which aims to emulate the brain-like functionality of neurons and synapses in hardware. Our nanowire networks display brain-like behaviours in response to electrical signals. External electrical signals cause changes in how electricity is transmitted at the points where nanowires intersect, which is similar to how biological synapses work. There can be tens of thousands of synapse-like intersections in a typical nanowire network, which means the network can efficiently process and transmit information carried by electrical signals.


Why public/private cooperation is the best bet to protect people on the internet

Neither the FTC nor the SEC was empowered by Congress with responsibility for cyberspace, and both have relied on pre-existing authorities related to corporate representations to bring actions against individuals who did not have corporate duties managing legal or external communications. They are using the tools at their disposal to change expectations, even if it means bringing a bazooka to a knife fight. These cases make CISOs worry that, in addition to being technical experts, they must also personally become experts on data breach disclosure laws and SEC reporting requirements rather than trusting their peers in the legal and communications departments of their organizations. What we need is a real partnership between the public and the private sector, clear rules and expectations for IT professionals and law enforcement, and an executive branch that will attempt regulation through rulemaking rather than through ugly and costly enforcement actions that target IT professionals for doing their jobs and further deepen the adversarial public-private divide.



Quote for the day:

"Leadership is working with goals and vision; management is working with objectives." -- Russel Honore

Daily Tech Digest - October 31, 2023

Do programming certifications still matter?

Hiring is one area where programming certifications definitely play a role. “One of the key benefits of having programming certifications is that they provide validation of a candidate's skills and knowledge in a particular programming language, framework, or technology,” says Aleksa Krstic, CTO at Localizely, a provider of a cloud-based translation platform. “Certifications can demonstrate that the individual has met certain standards and has the expertise required to perform a specific job.” For employers, programming certifications offer several advantages, Krstic says. “They can help streamline the hiring process by providing a benchmark for assessing candidates' skills and knowledge,” he says. “Certifications can also serve as a way to filter out applicants who do not meet the minimum requirements.” In cases where multiple candidates are equally qualified, having a relevant certification can give one candidate an edge over others, Krstic says. “When it comes to certifications in general, when we see a junior to mid-level developer armed with programming certifications, it's a big green light for our hiring team,” says Michał Kierul, CEO of software company SoftBlue.


Overseeing generative AI: New software leadership roles emerge

In addition to line-of-business expertise, the rise of AI will mean there is also a growing focus on prompt engineering and in-context learning capabilities. Databricks' Zutshi says, "This is a newer ability for developers to optimize prompts for large language models and build new capabilities for customers, further expanding the reach and capability of AI tools." Yet another area where software leaders will need to take the lead is AI ethics. Software engineering leaders "must work with, or form, an AI ethics committee to create policy guidelines that help teams responsibly use generative AI tools for design and development," Gartner's Khandabattu reports in her analysis. Software leaders will need to identify and help "to mitigate the ethical risks of any generative AI products that are developed in-house or purchased from third-party vendors." Finally, recruiting, developing, and managing talent will also get a boost from generative AI, Khandabattu adds. Generative AI applications can speed up hiring tasks, such as performing a job analysis and transcribing interview summaries.
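
For readers unfamiliar with the term, in-context learning amounts to packing worked examples into the prompt itself. Here is a minimal, hypothetical sketch; the task, examples, and labels are invented, and the final model call is left to whichever LLM API is in use.

```python
# Few-shot prompt construction, the core of in-context learning.
# Task, examples, and labels are hypothetical.
EXAMPLES = [
    ("The app crashes when I upload a photo.", "bug"),
    ("Please add a dark mode.", "feature-request"),
    ("How do I reset my password?", "question"),
]

def few_shot_prompt(ticket: str) -> str:
    """Build a prompt whose examples teach the model the task."""
    shots = "\n\n".join(f"Ticket: {t}\nLabel: {l}" for t, l in EXAMPLES)
    return ("Classify each support ticket into one label.\n\n"
            f"{shots}\n\nTicket: {ticket}\nLabel:")

# Send the resulting string to the LLM of your choice.
print(few_shot_prompt("The export button does nothing."))
```

Prompt engineering in this sense is largely about choosing and ordering those examples so the model generalizes correctly to the new input.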


Generative Agile Leadership: What The Fourth Industrial Revolution Needs

Expanding the metaphor of the head, heart and hands, I've developed eight generative agile leadership (GAL) principles; they are the structure needed to create resilient teams of happy, contributing people who amplify satisfied customers and deliver outcomes for a thriving business. ... The GAL principles come from Peter Senge's learning organization and Ron Westrum’s organizational cultures. The learning organization is an adaptive entity that expands the capabilities of people and the whole system. The generative model is a performance-oriented organizational culture in which high trust and low blame increase people's ability to express new ideas. ... The great-person leadership style, which holds that leaders are born and not made, will not age well in the 4IR. The human-centered generative leadership model is the best approach to leading the four generations now in the workforce. The GAL principles are rooted in the idea that leaders should help their employees grow and develop as individuals. Generative leaders focus on creating a learning environment and providing their employees with opportunities to reach their full potential.


IT Must Clean Up Its Own Supply Chain

At the end of our supply chain “clean up” exercise, we were pleased that we had gained a good handle on our vendor services and products. This would enable us to operate more efficiently. We were also determined to never fall into this supply chain quagmire again! To avoid that, we created a set of ongoing supply chain management practices designed to maintain our supply chain on a regular basis. We met regularly with vendors, designed a “no exceptions” contract review as part of every RFP process, and no longer settled for boilerplate vendor contracts that didn’t have expressly stated SLAs. We also made it a point to attend key vendor conferences and to actively participate in vendor client forums, because we believed it would give us an opportunity to influence vendor product and service directions so they could better align with our own. End to end, this exercise consumed time and resources, but it succeeded in capturing our attention. Attention to IT supply chains is even more relevant today as IT increasingly gets outsourced to the cloud.


‘Data poisoning’ anti-AI theft tools emerge — but are they ethical?

Hancock said genAI development companies are waiting to see how aggressive “or not” government regulators will be with IP protections. “I suspect, as is often the case, we’ll look to Europe to lead here. They’re often a little more comfortable protecting data privacy than the US is, and then we end up following suit,” Hancock said. To date, government efforts to address IP protection against genAI models are at best uneven, according to Litan. “The EU AI Act proposes a rule that AI model producers and developers must disclose copyright materials used to train their models. Japan says AI-generated art does not violate copyright laws,” Litan said. “US federal laws on copyright are still non-existent, but there are discussions between government officials and industry leaders around using or mandating content provenance standards.” Companies that develop genAI are increasingly turning away from indiscriminate scraping of online content and instead purchasing it, to ensure they don’t run afoul of IP statutes. That way, they can reassure customers purchasing their AI services that they won’t be sued by content creators.


SEC sues SolarWinds and its CISO for fraudulent cybersecurity disclosures

The SolarWinds case could act as a pivotal point for the role of a CISO, transforming it into one that requires a lot more scrutiny and responsibility. "The SolarWinds incident highlights the responsibility of CISOs of publicly listed companies in not only managing cyberattacks but also proactively informing customers and investors about their cybersecurity readiness and controls," said Pareekh Jain, chief analyst at Pareekh Consulting. "This lawsuit highlights that there were red flags earlier that the CISO failed to disclose. This will make corporations and CISOs take notice and treat proactive security disclosure more seriously, similar to how CFOs take financial information disclosure seriously." "There are many unknowns here; we don’t know if the CISO 'succumbed' to pressure from other leaders or if he was complicit in the hack," said Agnidipta Sarkar, vice president for CISO Advisory at ColorTokens Inc. "In either case, he is the target. But the reality is that the CISO is a very complex role. We are constantly required to navigate internal politics and pushbacks, and unless you are on your toes, you will be at the mercy of external forces at a scale no other CXO is exposed to."


Why adaptability is the new digital transformation

Sustainability and resilience are mature management disciplines because a lot of attention has been paid to developing strategies and implementing solutions to address them. When it comes to adaptability, however, apart from agile methodologies and adaptation as it relates to climate change, there’s very little prior work to draw on, which is why I addressed this issue in “A Guide to Adaptive Government: Preparing for Disruption.” Adaptive systems and resilient systems are often confused and thought of as interchangeable, but there’s a vast difference between the two concepts. Whereas an adaptive system restructures or reconfigures itself to best operate in, and optimize for, the ambient conditions, a resilient system often simply has to restore or maintain an existing steady state. In addition, whereas resilience is a risk management strategy, adaptability is both a risk management and an innovation strategy. The philosophy behind adaptive systems is more about innovation than risk management: it assumes from the start that there are no steady-state conditions to operate within and that the external environment is constantly changing.


Bringing Harmony to Chaos: A Dive into Standardization

Companies with different engineering teams working on various products often emphasize the importance of standardization. This process helps align large teams, promoting effective collaboration despite their diverse focuses. By ensuring consistency in non-functional aspects, such as security, cost, compliance and observability, teams can interact smoothly and operate in harmony, even with differing priorities. Standardizing these non-functional elements is key to maintaining system strength and resilience. It helps in setting consistent guidelines and practices across the company, minimizing conflicts. The aim is to seamlessly integrate standardization within these elements to improve adaptability and consistency. However, achieving this standardization isn’t easy: differences in operational methods can lead to inconsistencies. ... The aim of standardization is to create smooth and uniform processes, yet challenges arise from different team goals, changing technology and the tendency to unnecessarily create something new.


When tightly managing costs, smart founders will be rigorous, not ruthless 

Instead of ruthless, indiscriminate cost-cutting, it is wise to be very frugal about what doesn’t matter while you continue maintaining or even moderately investing in the things that do matter. When making cuts, never lose sight of your people. They’re anxious about the future, and you can’t pile more stress and excessive demands onto already-stressed workers. ... The outright elimination of things like team lunches, in-person meetings and little daily perks creates instant animosity. Thoughtful cuts instead leave visible and tangible reminders of the current environment, especially when you consider how important in-person gatherings are to sustaining a robust culture in a remote-work environment. Instead of quarterly in-person employee meetups, move to annual ones and replace the others with a DoorDash gift card and a video meeting. Curtailing all travel — both sales calls and team meetups — not only hurts morale, it allows justifiable excuses for missed targets, lost deals and churned customers.


A Beginner's Guide to Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation is a method that combines the power of large pre-trained language models with external retrieval or search mechanisms. The idea is to enhance the capability of a generative model by allowing it to pull information from a vast corpus of documents during the generation process. ... RAG has a range of potential applications, and one real-life use case is in the domain of chat applications. RAG enhances chatbot capabilities by integrating real-time data. Consider a sports league chatbot. Traditional LLMs can answer historical questions but struggle with recent events, like last night's game details. RAG allows the chatbot to access up-to-date databases, news feeds and player bios. This means users receive timely, accurate responses about recent games or player injuries. For instance, Cohere's chatbot provides real-time details about Canary Islands vacation rentals — from beach accessibility to nearby volleyball courts. Essentially, RAG bridges the gap between static LLM knowledge and dynamic, current information.
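
The retrieve-then-generate loop is simple enough to sketch end to end. Below is a toy, self-contained Python version: bag-of-words similarity stands in for real embeddings, the documents echo the article's sports-chatbot example, and the final LLM call is omitted. It is illustrative only, not any vendor's pipeline.

```python
# Toy RAG: retrieve the most relevant documents for a query, then
# build an augmented prompt for a (not shown) language model.
import math
from collections import Counter

DOCS = [
    "The Falcons beat the Hawks 3-1 last night; Ortiz scored twice.",
    "Striker Ortiz is out for two weeks with an ankle injury.",
    "Season tickets for the Falcons go on sale next Monday.",
]

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    """Augment the user's question with retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What happened in last night's game?"))
```

A production system would swap the Counter vectors for learned embeddings in a vector database and pass the built prompt to an LLM, but the shape of the loop is the same.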



Quote for the day:

“Vulnerability is the birthplace of innovation, creativity, and change.” -- Brené Brown