Daily Tech Digest - February 18, 2022

TrickBot Ravages Customers of Amazon, PayPal and Other Top Brands

The TrickBot malware was originally a banking trojan, but it has evolved well beyond those humble beginnings to become a wide-ranging credential-stealer and initial-access threat, often responsible for fetching second-stage binaries such as ransomware. Since the well-publicized law-enforcement takedown of its infrastructure in October 2020, the threat has clawed its way back, now sporting more than 20 different modules that can be downloaded and executed on demand. It typically spreads via emails, though the latest campaign adds self-propagation via the EternalRomance vulnerability. “Such modules allow the execution of all kinds of malicious activities and pose great danger to the customers of 60 high-profile financial (including cryptocurrency) and technology companies,” CPR researchers warned. “We see that the malware is very selective in how it chooses its targets.” It has also been seen working in concert with a similar malware, Emotet, which suffered its own takedown in January 2021.


‘Ice phishing’ on the blockchain

There are multiple types of phishing attacks in the web3 world. The technology is still nascent, and new types of attacks may emerge. Some attacks look similar to traditional credential phishing attacks observed on web2, but some are unique to web3. One aspect that the immutable and public blockchain enables is complete transparency, so an attack can be observed and studied after it has occurred. It also allows assessment of the financial impact of attacks, which is challenging in traditional web2 phishing attacks. Recall that with the cryptographic keys (usually stored in a wallet), you hold the key to your cryptocurrency coins. Disclose that key to an unauthorized party and your funds may be moved without your consent. Stealing these keys is analogous to stealing credentials to web2 accounts. Web2 credentials are usually stolen by directing users to an illegitimate website through a set of phishing emails. While attackers can use a similar tactic on web3 to get to your private key, given the current adoption, the likelihood of an email landing in the inbox of a cryptocurrency user is relatively low.
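
To make the "keys are credentials" point concrete, here is a minimal Python sketch using the eth_account package from the web3.py stack. The key and addresses are made up for illustration; the point is that a valid signature is the only authorization the network checks, so whoever holds the key controls the funds.

```python
# A minimal sketch (eth_account): possession of the private key is sufficient
# to sign a transfer; no other consent from the owner is ever checked.
from eth_account import Account

# Hypothetical key for illustration only -- never hardcode or share real keys.
leaked_key = "0x" + "11" * 32

acct = Account.from_key(leaked_key)
print("Whoever holds the key controls:", acct.address)

# An attacker signs a transaction draining the wallet. The signature IS the
# authorization; the victim is never consulted.
tx = {
    "to": "0x000000000000000000000000000000000000dEaD",  # attacker-chosen address
    "value": 10**18,   # 1 ETH in wei
    "gas": 21000,
    "gasPrice": 10**9,
    "nonce": 0,
    "chainId": 1,
}
signed = acct.sign_transaction(tx)
# Note: the attribute is named raw_transaction in newer eth-account releases.
print("Ready to broadcast:", signed.rawTransaction.hex())
```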


Cloud Security Alliance publishes guidelines to bridge compliance and DevOps

As for tooling, CSA called for organisations to embrace infrastructure as code to eliminate manual provisioning of infrastructure. They can do so through services such as AWS CloudFormation or capabilities from the likes of Chef, Ansible and Terraform, paving the way for automation, version control and governance. Organisations can also establish guardrails that constantly monitor software deployments to ensure alignment with their goals and objectives, including compliance. These guardrails can be represented as high-level rules with detective and preventive policies. Guardrails may be implemented as a means of compliance reporting, such as counting the number of machines running approved operating systems (OSes), or as remedies to non-compliance, such as shutting down machines running unapproved OSes. With a tendency to address risk directly through tooling, organisations can easily overlook the importance of having the appropriate mindset in a DevSecOps transformation. CSA defines mindset as the ways to bring security teams and software developers closer together.
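
As a rough illustration of the detective/preventive split, here is a minimal Python/boto3 sketch of the "unapproved OS" guardrail. The AMI allow-list is hypothetical, and a production setup would more likely lean on AWS Config rules or service control policies; this only shows the detect-then-remediate shape.

```python
# A detective guardrail (report only) that can be flipped to a preventive
# remedy (stop the instance). Assumes standard AWS credentials are configured.
import boto3

APPROVED_AMIS = {"ami-0aaaaaaaaaaaaaaaa", "ami-0bbbbbbbbbbbbbbbb"}  # hypothetical IDs

ec2 = boto3.client("ec2")

def enforce_os_guardrail(dry_run: bool = True) -> None:
    """Flag instances built from unapproved images; optionally stop them."""
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance["ImageId"] not in APPROVED_AMIS:
                    print(f"Non-compliant: {instance['InstanceId']} ({instance['ImageId']})")
                    if not dry_run:  # preventive mode
                        ec2.stop_instances(InstanceIds=[instance["InstanceId"]])

enforce_os_guardrail()  # dry_run=True: compliance report only
```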


Use of Artificial Intelligence in the Banking World 2022

Chatbots are one of the most-used applications of artificial intelligence, not only in banking but across the spectrum. Once deployed, AI chatbots can work 24/7, remaining available to customers at all times. In fact, several surveys and market research studies have found that people actually prefer interacting with bots instead of humans. This can be attributed to the use of natural language processing in AI chatbots. With NLP, AI chatbots are better able to understand user queries and communicate in a seemingly human way. An example of AI chatbots in banking can be seen at Bank of America with Erica, its virtual assistant. Erica handled 50 million client requests in 2019 and can handle requests including card security updates and credit card debt reduction. Digital-savvy banking customers today need more than what traditional banking can offer. With AI, banks can deliver the personalized solutions that customers are seeking. An Accenture survey suggested that 54% of banking customers wanted an automated tool to help monitor budgets and suggest real-time spending adjustments.
 

Data democratisation and AI—The superpowers to augment customer experience in 2022

With the powerful combination of data and AI at their fingertips, teams can gain deeper insights into their customers. Such technologies can also provide recommendations for the next best action. Critical decisions such as the right message, the right channel, and the right time can be optimised to boost efficiency as well as delight consumers. For example, ecommerce brands can identify customers who buy from a specific luxury brand and personalise offers. Banks can determine which customers have not completed the onboarding journey and eliminate roadblocks to help them move towards completion. Music streaming apps can create custom playlists for each listener based on their preferred music and artists. In the past, these insights were gathered from multiple platforms, often with the help of technology or data teams running big data queries. The time required to run these queries, draw insights, and then apply them was often long, which meant brands could not get to market quickly.


Federated Machine Learning and Edge Systems

It helps to look at a practical use case. We're going to look at Federated Learning of Cohorts (FLoC), also developed by Google. It's essentially a proposal to do away with third-party cookies, because they're awful and nobody likes them, and they are a privacy nightmare. Many browsers are removing functionality for third-party cookies or automatically blocking them. But what should we use for targeted, personalized advertising if we don't have third-party cookies? That's what Google proposed FLoC would do. The idea behind FLoC is that you get assigned a cohort based on what you like, your browsing history, and so on. Imagine two different cohorts: a group that likes plums and a group that likes oranges. If you were a fruit seller, you might want to target the plum cohort with plum ads and the orange cohort with orange ads, for example. The goal was to resolve the privacy problems and the poor user experience of online targeted ads, where sometimes a user would click on something and it would follow them for days.
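
Google's published FLoC proposal grouped users by applying SimHash to their browsing history, so similar histories land in the same cohort. Below is a toy Python sketch of that idea; the bit width, hash function, and domain names are illustrative, not Chrome's actual implementation.

```python
# Toy SimHash cohort assignment: hash each visited domain, then keep, for each
# bit position, the majority vote across the history. Similar histories agree
# on most bits, so they produce the same (or a nearby) cohort ID.
import hashlib

def cohort_id(domains, bits=16):
    votes = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

plum_fans   = ["plums.example", "stonefruit.example", "recipes.example"]
orange_fans = ["oranges.example", "citrus.example", "recipes.example"]

print(f"plum cohort:   {cohort_id(plum_fans):04x}")
print(f"orange cohort: {cohort_id(orange_fans):04x}")
# An ad network sees only the cohort ID, never the underlying history.
```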


Inside Look at an Ugly Alleged Insider Data Breach Dispute

The Premier lawsuit alleges that changes to the company's security controls - including disabling endpoint security - allowed Sohail's continued access to Premier "trade secrets" and other sensitive information after his resignation as CIO. It says that Sohail "colluded with or coerced" Pakistan-based Sajid Fiaz, who served as Premier's IT administrator while also being employed at Wiseman Innovations as an IT infrastructure manager and HIPAA officer. Premier alleges that Fiaz's actions related to its data security provided Sohail "unfettered access to the master password for endpoint security that enabled that data theft and misuse through USB drives connected to secure IT systems." It says Sohail had an unrestricted ability to copy data to and from the company laptops and that a forensic report showed that he retained and accessed .PST files of emails from Premier after resigning as CIO. ".PST files are an aggregated archive of all emails sent to and from an email address including all attachments," Premier says in court documents.


How challenging is corporate data protection?

When employees quit their jobs, there is a 37% chance an organization will lose IP. With 96% of companies noting they experience challenges in protecting corporate data from insider risk, it’s clear insider risk must be prioritized. However, ownership of the problem remains vaguely defined. Only 21% of companies’ cybersecurity budgets have a dedicated component to mitigate insider risk, and 91% of senior cybersecurity leaders still believe that their companies’ Board requires better understanding of insider risk. “With employee turnover and the shift to remote and collaborative work, security teams are struggling to protect IP, source code and customer information. This research highlights that the challenge is even more acute when a third of employees who quit take IP with them when they leave. On top of that, three-quarters of security teams admit that they don’t know what data is leaving when employees depart their organizations,” said Joe Payne, Code42 president and CEO. “Companies must fundamentally shift to a modern data protection approach – insider risk management (IRM) – that aligns with today’s cloud-based, hybrid-remote work environment and can protect the data that fuels their innovation, market differentiation and growth.”


High-Severity RCE Bug Found in Popular Apache Cassandra Database

John Bambenek, principal threat hunter at the digital IT and security operations company Netenrich, told Threatpost on Wednesday that he suspects that the non-default settings are “common in many applications around the world.” The situation isn’t looking as bad as Log4j, but it could still potentially be widespread, and it’s going to be a chore to dig out vulnerable installations, Bambenek said via email. “Unfortunately, there is no way to know exactly how many installations are vulnerable, and this is likely the kind of vulnerability that will be missed by automated vulnerability scanners,” he said. “Enterprises will have to go into the configuration files of every Cassandra instance to determine what their risk is.” Casey Bisson, head of product and developer relations at code-security solutions provider BluBracket, told Threatpost that the issue could have “a broad impact with very serious consequences,” as in, “Threat actors may be able to read or manipulate sensitive data in vulnerable configurations.”
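
The configuration-file audit Bambenek describes can at least be scripted. Below is a minimal Python sketch that walks cassandra.yaml files and flags the risky non-default combination reported publicly for this bug (tracked as CVE-2021-44521); verify the exact flag names against your Cassandra version before relying on this.

```python
# Flag cassandra.yaml files with the reported risky UDF combination:
# UDFs enabled, scripted UDFs enabled, and UDF thread isolation turned off.
import sys
import yaml  # pip install pyyaml

def is_vulnerable(path: str) -> bool:
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    return (
        cfg.get("enable_user_defined_functions") is True
        and cfg.get("enable_scripted_user_defined_functions") is True
        and cfg.get("enable_user_defined_functions_threads") is False
    )

for path in sys.argv[1:]:
    print(f"{path}: {'VULNERABLE' if is_vulnerable(path) else 'ok'}")
```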


Top tips for entering an IT partnership for the first time

For businesses looking to strike up an IT partnership, it is crucial to ensure that potential partners are on the same page and are working towards similar outcomes. A shared vision leads to greater understanding, trust, and judgement throughout the project. Therefore, it is highly recommended that organisations spend sufficient time when selecting their partner to understand its exact values, ideas, goals and ambitions. In order to accomplish this, it is recommended to physically visit potential partners, understand their culture and apply a human-to-human approach – whilst understanding that this is not a one-time project but an ongoing process to improve and build a solid relationship. Businesses must ensure the relevance of the partner to the project by matching the usefulness of their skills, ideas, and experience to the client's project. Establishing a partnership that runs contrary to client needs will not only lead to customer dissatisfaction, but will also cause the value of the partnership itself to be overlooked.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - February 17, 2022

Reflections on Failure, Part One

There are many reasons why failures in security are inevitable. As I wrote previously, the human minds practicing security are fatally flawed and will therefore make mistakes over time. And even if our reasoning abilities were free of bias, we would still not know everything there is to know about every possible system. Security is about reasoning under uncertainty for both attacker and defender, and sometimes our uncertainty will result in failure. None of us know how to avoid all mistakes in our code, all configuration errors, and all deployment issues. Further still, learning technical skills in general and “security” in particular requires a large amount of trial and error over time. But we can momentarily disregard our biased minds, the practically unbridgeable gap between what we can know and what is true, and even the simple need to learn skills and knowledge on both sides of the fence. The inevitability of failure follows directly from our earlier observation about conservation. If failure is conserved between red and blue, then every action in this space can be interpreted as one. 


APIs in Web3 with The Graph — How It Differs from Web 2.0

The Graph protocol is being built by a company called Edge & Node, which Yaniv Tal is the CEO of. Nader Dabit, a senior engineer who I interviewed for a recent post about Web3 architecture, also works for Edge & Node. The plan for the company seems to be to build products based on The Graph, as well as make investments in the nascent ecosystem. There’s some serious API DNA in Edge & Node. Three of the founders (including Tal) worked together at MuleSoft, an API developer company acquired by Salesforce in 2018. MuleSoft was founded in 2007, near the height of Web 2.0. Readers familiar with that era may also recall that MuleSoft acquired the popular API-focused blog, ProgrammableWeb, in 2013. Even though none of the Edge & Node founders were executives at MuleSoft, it’s interesting that there is a thread connecting the Web 2.0 API world and what Edge & Node hopes to build in Web3. There are a lot of technical challenges for the team behind The Graph protocol — not least of all trying to scale to accommodate multiple different blockchain platforms. Also, the “off-chain” data ecosystem is complex and it’s not clear how compatible different storage solutions are to each other.


Introducing Apache Arrow Flight SQL: Accelerating Database Access

While standards like JDBC and ODBC have served users well for decades, they fall short for databases and clients that wish to use Apache Arrow or columnar data in general. Row-based APIs like JDBC or PEP 249 require transposing data in this case, and for a database which is itself columnar, this means that data has to be transposed twice—once to present it in rows for the API, and once to get it back into columns for the consumer. Meanwhile, while APIs like ODBC do provide bulk access to result buffers, this data must still be copied into Arrow arrays for use with the broader Arrow ecosystem, as implemented by projects like Turbodbc. Flight SQL aims to get rid of these intermediate steps. Flight SQL means database servers can implement a standard interface that is designed around Apache Arrow and columnar data from the start. Just as Arrow provides a standard in-memory format, Flight SQL saves developers from having to design and implement an entirely new wire protocol. As mentioned, Flight already implements features like encryption on the wire and authentication of requests, which databases do not need to re-implement.
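
Here is a small pyarrow sketch of the double transpose described above. It is not Flight SQL itself, just an illustration of what row-based access costs a columnar engine and its columnar consumer.

```python
# Columnar data forced through a row API and back: two transposes.
import pyarrow as pa

# Columnar data, as a columnar database would hold it.
table = pa.table({"id": [1, 2, 3], "price": [9.5, 3.2, 7.8]})

# Transpose 1: a JDBC/PEP 249-style API presents rows.
rows = table.to_pylist()  # [{'id': 1, 'price': 9.5}, ...]

# Transpose 2: the consumer wants columns again (pandas, NumPy, Arrow).
rebuilt = pa.table({"price": [r["price"] for r in rows]})

# With an Arrow-native protocol like Flight SQL, the columnar buffers in
# `table` would cross the wire as-is, and neither transpose would happen.
assert rebuilt["price"].to_pylist() == table["price"].to_pylist()
```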


The Graph (GRT) gains momentum as Web3 becomes the buzzword among techies

One of the main reasons for the recent increase in attention for The Graph is the growing list of subgraphs offered by the network for popular decentralized applications and blockchain protocols. Subgraphs are open application programming interfaces (APIs) that can be built by anyone and are designed to make data easily accessible. The Graph protocol is working on becoming a global graph of all the world’s public information, which can then be transformed, organized and shared across multiple applications for anyone to query. ... A third factor helping boost the prospects for GRT is the rising popularity of Web3, a topic and sector that has increasingly begun to make its way into mainstream conversations. Web3 as defined by Wikipedia is an “idea of a new iteration of the World Wide Web that is based on blockchain technology and incorporates concepts such as decentralization and token-based economics.” The overall goal of Web3 is to move beyond the current form of the internet where the vast majority of data and content is controlled by big tech companies, to a more decentralized environment where public data is more freely accessible and personal data is controlled by individuals.
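
As a rough sketch of what "easily accessible" means in practice, a subgraph is queried with a plain HTTP POST carrying a GraphQL document. The endpoint and entity names below are hypothetical stand-ins; each real subgraph publishes its own schema.

```python
# Query a (hypothetical) subgraph over HTTP with a GraphQL document.
import requests

SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/example/example-subgraph"

query = """
{
  tokens(first: 5, orderBy: tradeVolume, orderDirection: desc) {
    id
    symbol
  }
}
"""

resp = requests.post(SUBGRAPH_URL, json={"query": query}, timeout=30)
resp.raise_for_status()
for token in resp.json()["data"]["tokens"]:
    print(token["id"], token["symbol"])
```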


Brain-Inspired Chips Good for More than AI, Study Says

Neuromorphic chips typically imitate the workings of neurons in a number of different ways, such as running many computations in parallel. ... Furthermore, whereas conventional microchips use clock signals fired at regular intervals to coordinate the actions of circuits, the activity in neuromorphic architectures is often spiking in nature, triggered only when an electrical charge reaches a specific value, much like what happens in brains like ours. Until now, the main advantage envisioned for neuromorphic computing was power efficiency: features such as spiking and the uniting of memory and processing resulted in IBM's TrueNorth chip, which boasted a power density four orders of magnitude lower than conventional microprocessors of its time. "We know from a lot of studies that neuromorphic computing is going to have power-efficiency advantages, but in practice, people won't care about power savings if it means you go a lot slower," says study senior author James Bradley Aimone, a theoretical neuroscientist at Sandia National Laboratories in Albuquerque.
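
The spiking behavior is easy to see in a toy leaky integrate-and-fire model, a standard simplification in neuromorphic work. The constants below are arbitrary; the point is that output happens only when accumulated charge crosses a threshold, not on every clock tick.

```python
# Toy leaky integrate-and-fire neuron: integrate input current with leakage,
# fire only when the membrane potential crosses the threshold, then reset.
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # integrate, with leakage
        if potential >= threshold:              # event-driven: fire on crossing
            spikes.append(t)
            potential = 0.0                     # reset after the spike
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.8, 0.8]))  # -> [2, 5]
```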


Could Biology Hold the Clue to Better Cybersecurity?

The framework is designed to inoculate a user from ransomware, remote code execution, supply chain poisoning, and memory-based attacks. "If we're going to change the way we protect assets, we need to take a completely different approach," says Dave Furneaux, CEO of Virsec. "Companies are spending more and more money on solutions and not seeing any improvement." Furneaux likens the approach to the mRNA technology that vaccine makers Moderna and Pfizer have used. "Once you determine how to adapt a cell and the way it might behave in response to a threat, you can better protect the organism," Furneaux says. In biology, the approach relies on an inside-out approach. In cybersecurity, the method goes down into the lowest building blocks of software — which are like the cells in a body — to protect the entire system. "By understanding the RNA and DNA, we can create the equivalent of a vaccine," Furneaux adds. Other cybersecurity vendors, including Darktrace, Vectra AI, and BlackBerry Cybersecurity, have also developed products that rely to some degree on biological models.


In the Web3 Age, Community-Owned Protocols Will Deliver Value to Users

It's a virtuous cycle, Oshiro told Decrypt. As adoption of 0x increases, the protocol becomes Web3's foundational layer for tokenized value exchange. That, in turn, drives adoption by integrators who build on 0x, generating more economic value for themselves and users—ultimately bringing the trillions of dollars of economic value that the Internet has already created to the users of the next-generation decentralized Internet. ... Building exchange infrastructure on top of rapidly evolving blockchains means the 0x Protocol will need to be constantly tweaked and improved. Since its launch, 0x has been gradually transitioning all decisions over infrastructure upgrades and management of the treasury to its token holders. “The ability to upgrade comes along with an immense amount of power and a ton of downstream externalities,” Warren said. “And so it’s critical that the only ones who can update the infrastructure are the stakeholders and the people who are building businesses on top of it—that is how we’re thinking about this.”


How to Make Cybersecurity Effective and Invisible

CIOs have a balance to strike: Security should be robust, but instead of being complicated or restrictive, it should be elegant and simple. How do CIOs achieve that "invisible" cybersecurity posture? It requires the right teams, superior design, and cutting-edge technology, processes, and automation. Start with expertise and design, putting the right talent and security architecture to work. Organizations hoping to achieve invisible cybersecurity must first focus on talent and technical expertise. Security can no longer be handled only through awareness, policy, and controls. It must be baked into everything IT does as a fundamental design element. The IT landscape should be assessed for weaknesses, and an action plan should then be put in place to mitigate risk through short-term actions. Long term, organizations need to design a landscape that is more compartmentalized and resilient, by implementing strategies like zero trust and microsegmentation. For this, companies need the right expertise. Given cybersecurity workforce shortages, organizations may need to identify and onboard an IT partner with strong cyber capabilities and offerings.


Secure Code Quickly as You Write It

Most developers aren’t security experts, so tools that are optimized for the needs of the security team are not always efficient for them. A single developer doesn’t need to know every bug in the code; they just need to know the ones that affect the work they’ve been assigned to fix. Too much noise is disruptive and causes developers to avoid using security tools. Developers also need tools that won’t disrupt their work. By the time security specialists find issues downstream, developers have moved on. Asking them to leave the IDE to analyze issues and determine potential fixes results in costly rework and kills productivity. Even teams that recognize the upside of checking their code and open source dependencies for security issues often avoid the security tools they’ve been given because it drags down their productivity rates. What developers need are tools that provide fast, lightweight application security analysis of source code and open source dependencies right from the IDE. Tooling like this enables developers to focus on issues that are relevant to their current work without being burdened by other unrelated issues.


Data Patterns on Edge

Most of the data in the internet space falls into this bucket: the enterprise data set comprises many interdependent services working hierarchically to extract the required data sets, which can be personalized or generic in format. Traditionally, the feasibility of moving this data to the edge was limited to serving static resources, header data sets, or media files from the edge or the CDN, while the base data set was still retrieved from the source data center or the cloud provider. For user experiences, optimization centers on the principles of the critical rendering path and the associated improvement in navigation timelines for web-based experiences, and on how much of the view model is offloaded to the app binary in device experiences. In hybrid experiences, the state model is updated periodically via server push or polling. The use case in discussion is how we can enable edge retrieval for data sets that are personalized.



Quote for the day:

"A leader is the one who climbs the tallest tree, surveys the entire situation and yells wrong jungle." -- Stephen Covey

Daily Tech Digest - February 16, 2022

Metaverse: Making it a universe for all

With a lot of speculation and little clarity about how it will work, technology companies and governments are only starting to invest in the concept. However, these investments and innovations continue to be riddled with the same concerns that various social scientists and philosophers have raised about the promises made by the internet and social media. Having set off to "democratise the good and disrupt the bad", the internet has actively helped in the creation of international monopolies holding more power than the governments of nations. Although it has brought immense information to our fingertips, gatekeepers of knowledge still continue to profit by encouraging exclusion, our social relationships have taken a back seat as we become increasingly preoccupied with our online identities, and many vulnerable groups are left behind due to infrastructural inaccessibility to phones, laptops, computers, and the internet. As the world prepares to step into the metaverse in the next decade, these same questions come to the fore.


How to manage software developers without micromanaging

If software developers detest micromanaging, many have a stronger contempt for yearly performance reviews. Developers target real-time performance objectives and aim to improve velocity, code deployment frequency, cycle times, and other key performance indicators. Scrum teams discuss their performance at the end of every sprint, so the feedback from yearly and quarterly performance reviews can seem superfluous or irrelevant. But there’s also the practical reality that organizations require methods to recognize whether agile teams and software developers meet or exceed performance, development, and business objectives. How can managers get what they need without making developers miserable? What follows are seven recommended practices that align with principles in agile, scrum, devops, and the software development lifecycle and that could be applied to reviewing software developers. I don’t write them as SMART goals, but leaders should adopt the relevant ones as such based on the organization’s agile ways of working and business objectives.


Neuralink: Elon Musk's brain implant firm refutes animal abuse claims

The complaint centers on the care provided to test monkeys during and after implant and removal procedures at UC Davis. PCRM alleges that Neuralink and UC Davis staff failed to provide monkeys with adequate veterinary care, used an unapproved substance called BioGlue that killed monkeys in the experiments, and euthanized several monkeys. Details of the monkeys' conditions were revealed in documents released by the university after PCRM filed a public records lawsuit in 2021. Neuralink says that during the 2.5 years at UC Davis, its tests were only conducted on cadavers or "terminal procedures", which involved the "humane euthanasia of an anesthetized animal at the completion of the surgery." "The initial work from these procedures allowed us to develop our novel surgical and robot procedures, establishing safer protocols for subsequent survival surgeries," the company says. During survival studies, Neuralink says two animals were euthanized at planned dates and six animals were euthanized on the medical advice of UC Davis veterinary staff.


Creating a more sustainable IT department

Technical debt requires more infrastructure, which results in more and more carbon emissions. In addition, technical debt requires more manual processes. For example, if we have three different ERP systems that are not integrated, it takes more manual effort just to extract the reports for financial reporting, which results in more paperwork. Technical debt accumulation results in much more latency in processes, much more manual effort, much more infrastructure -- and all that adds to our carbon footprint. Because of that we in the technology group also have an eye toward not accumulating net new technical debt. We do that, in part, by looking at how we manage our vendors so we don't end up buying similar products [for different areas of the business] as well as how we introduce new technologies into our ecosystem to avoid duplication and to ensure they can scale across the organization. ... Continued hybrid and flexible working options for our employees also helps us support reduced emissions because employees don't have to commute. We have also implemented our own business systems management platform that facilitates hot-desking in the workplace.


How Dutch hackers are working to make the internet safe

However happy the foundation was with the donation, it did lead to a slight panic. “It was just before the turn of the year, the term ‘wealth tax’ came up, so we hastily set up a fund and found an administration office to handle the annual accounts,” says Van ’t Hof. This accelerated the professionalisation of the foundation. A new structure was set up, with DIVD as the fundamental institute. “Victor Gevers was the chairman of that before, but because we wanted a different structure, that stopped and we had to look for a director. Surprisingly, everyone pointed in my direction. I took on that task,” says Van ’t Hof. Under the flag of the DIVD Institute is the fund that is meant to bundle all subsidies, donations and other money flows. “From that fund, we can finance projects that contribute to a safer internet,” explains the DIVD director. To give shape to the global ambition, a separate foundation was also set up, CSIRT.global, of which Eward Driehuis is in charge. “That foundation will set up departments in other countries so that volunteer hackers there can also help to scan and report,” says Van ’t Hof.


How to Run a Cassandra Operation in Docker

Containers thrive in a world of modern applications that demand faster delivery, better portability and seamless scalability. Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production. The steam behind this growing trend is none other than Docker. Docker is an open source containerization platform that lets developers package applications into containers that include everything they need to run in different environments. For enterprises, however, it can be tricky to manage individual Docker containers at scale, giving way to the popular container orchestration platform: Kubernetes (K8s). In short, Kubernetes makes it easy to deploy, manage and scale containers — and is the dominant orchestration platform used in enterprises today. This makes learning Kubernetes a must for every budding application developer; but first, you need to understand containers and Docker. In this Cassandra Operations in Docker workshop, you’ll become familiar with Docker and learn how to deploy a cloud native application in containers.
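
To see how little it takes to get a single node running, here is a minimal sketch using the Docker SDK for Python (docker-py) instead of the CLI. The image tag and port are the stock ones from the official image; the container name is arbitrary.

```python
# Start one Cassandra node in Docker and wait for it to finish bootstrapping.
import docker  # pip install docker

client = docker.from_env()

cassandra = client.containers.run(
    "cassandra:4.0",              # official image
    name="cassandra-workshop",    # arbitrary container name
    ports={"9042/tcp": 9042},     # CQL native transport port
    detach=True,
)

# Cassandra takes a while to bootstrap; its log announces readiness.
for line in cassandra.logs(stream=True):
    text = line.decode(errors="replace")
    print(text, end="")
    if "Startup complete" in text:
        break
```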


MIT Develops New Programming Language for High-Performance Computers

The ATL project combines two of the main research interests of Ragan-Kelley and Chlipala. Ragan-Kelley has long been concerned with the optimization of algorithms in the context of high-performance computing. Chlipala, meanwhile, has focused more on the formal (as in mathematically-based) verification of algorithmic optimizations. This represents their first collaboration. Bernstein and Liu were brought into the enterprise last year, and ATL is the result. It now stands as the first, and so far the only, tensor language with formally verified optimizations. Liu cautions, however, that ATL is still just a prototype — albeit a promising one — that’s been tested on a number of small programs. “One of our main goals, looking ahead, is to improve the scalability of ATL, so that it can be used for the larger programs we see in the real world,” she says. In the past, optimizations of these programs have typically been done by hand, on a much more ad hoc basis, which often involves trial and error, and sometimes a good deal of error. 


Goal-Driven Kanban

Adopting goal-driven Kanban was done in one team. Initially, the team used Scrum. Due to the nature of the business, the team had a significant percentage of tasks that depended on other teams and various stakeholders, and thus it was repeatedly unable to complete them on time. Naturally, this caused frustration, and the team decided to switch to Kanban. This cured the issue, but over time the team members started feeling that they worked as a "feature factory". They were missing challenges. Thus Goal-Driven Kanban was born. After receiving management support and agreement from Product Management, the team chose its first goal. It immediately revealed the need to re-plan other features and tasks, since the team had to re-focus on the agreed goal. It required rough estimation of the goal, understanding of the team's capacity and further agreements with stakeholders. While working on the goal the team had to tackle various challenges, because the bar was high and the whole team had to work together on overall design and development.


A product manager’s guide to web3

Many PMs develop skills like "communication" and "influence" at larger organizations, or even startups where they need to work closely with founders and rally overworked teams. This makes sense because persuasion and coordination have been core to the web2 PM job. Those skills don't matter as much here. Web3 PM is more focused on execution and community—like signing a big new protocol partner or getting tons of anon users via Twitter. In web2 I was afraid to tweet much, for fear of professional consequences. Now I'd be untrustworthy if I didn't tweet a lot. Making a viral meme is more important than writing a good email. That is because getting positive attention in the frenetic world of web3 is more valuable than "alignment." ... Web3 moves too quickly for pontification; new protocols launch daily and DeFi concepts like bonding curves and OHM forks are being tested in real time, so visions and strategies quickly become outdated. This may change over time as the space matures and product vision becomes more of a competitive advantage.


NIST releases software, IoT, and consumer cybersecurity labeling guidance

The order asked NIST to produce guidance for federal agency staff who have software procurement-related responsibilities and is intended to help federal agency staff know what information to request from software producers regarding their secure software development practices. The new NIST document spells out minimum recommendations for federal agencies to follow as they acquire software or a product containing software. The order also directed NIST to define actions or outcomes for software producers, such as commercial-off-the-shelf (COTS) product vendors, government-off-the-shelf software developers, contractors, and other custom software developers. ... NIST notes that its guidance is limited to federal agency procurement of software, which includes firmware, operating systems, applications, and application services, as well as products containing software. Software developed by federal agencies is out of scope, as is open-source software freely and directly obtained by federal agencies.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree

Daily Tech Digest - February 15, 2022

Cloud storage data residency: How to achieve compliance

Data residency and data sovereignty are increasingly governed by local laws. There is a growing push towards data sovereignty, in part because of supply chain and security concerns. As Mathieu Gorge, CEO at compliance experts Vigitrust, points out, firms and governments alike are increasingly concerned about geopolitical risk. Firms also need to be aware of data adequacy requirements if they intend to move data across borders. This could come into play if they move between hyperscaler regions and availability zones (AZs), or change SaaS providers. "There is adequacy between the UK and EU, but you are still relying on clauses in the contract to demonstrate that adequacy," he cautions. Meanwhile, the challenge of data residency is becoming more complicated as more countries roll out data sovereignty regulations. The EU's GDPR does not actually include stipulations on data residency, relying instead on data adequacy. The UK's post-Brexit approach follows that of GDPR. But the growth of local data privacy laws is increasingly linked to more localised, or even nationalistic, views of IT resources, and specific regulations and laws can also set out data residency requirements.


Log4j hearing: 'Open source is not the problem'

“Open source is not the problem,” stated Dr. Trey Herr, director of the Cyber Statecraft Initiative with Atlantic Council think tank during a US Senate Committee on Homeland Security & Government Affairs hearing this week. “Software supply-chain security issues have bedeviled the cyber-policy community for years.” Experts have been predicting a long-term struggle to remedy the Log4j flaw and its impact. Security researchers at Cisco Talos for example stated that Log4j will be widely exploited moving forward, and users should patch affected products and implement mitigation solutions as soon as possible. The popular, Java-logging software is widely used in enterprise and consumer services, websites, and applications as an easy-to-use common utility to support client/server application development. If exploited, the Log4j weakness could let an unauthenticated remote actor take control of an affected server system and gain access to company information or unleash a denial of service attack. The Senate panel called on experts in order to find out about industry responses and ways to prevent future software exposures.


How to create data management policies for unstructured data

Automate as much as you can. A declarative approach is the goal. While there are many options available now using independent data management software to manage policies across storage, many organizations still employ IT managers and spreadsheets to create and track policies. The worst part of this bespoke manual effort is searching for files containing certain attributes and then moving or deleting them. These efforts are inefficient, incomplete, and impede the goals of having policies; it’s painful to maintain them, and IT professionals have too many competing priorities. Plus, this approach limits the potential of using policies to continuously curate and move data to data lakes for strategic AI and ML projects. Instead, look for a solution with an intuitive interface to build and execute on a schedule and that runs in the background without human intervention. Measure outcomes and refine. Any data management policy should be mapped to specific goals, such as cost savings on storage and backups. It should measure those outcomes and let you know their status so that if those goals are not being met, you can change the plans accordingly.
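
For scale, here is a minimal Python sketch of the kind of rule such tooling automates, written dry-run-first so it reports before it moves anything. The paths, the chosen attribute (days since last access), and the threshold are all illustrative.

```python
# Policy: move files untouched for N days from hot storage to an archive tier.
import shutil
import time
from pathlib import Path

POLICY = {
    "source": Path("/data/projects"),   # illustrative paths
    "archive": Path("/data/archive"),
    "max_idle_days": 365,
}

def apply_policy(dry_run: bool = True) -> None:
    cutoff = time.time() - POLICY["max_idle_days"] * 86400
    for path in POLICY["source"].rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            target = POLICY["archive"] / path.relative_to(POLICY["source"])
            print(f"{'WOULD MOVE' if dry_run else 'MOVING'} {path} -> {target}")
            if not dry_run:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(path), str(target))

apply_policy()  # schedule with dry_run=False once the report looks right
```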


7 Ways to Fail at Microservices

Microservices envy is a problem, because microservices aren’t the sort of thing we should be envying. One of our consultants has a heuristic that if a client keeps talking about Netflix and asking for microservices, he knows the engagement is in trouble. Almost certainly, they’re not moving to microservices for the right reason. If the conversation is a bit deeper, and covers things like coupling and cohesion, then he knows they’re in the right space. The starting ambition for a microservices transformation should never be the microservices themselves. Microservices are the means to achieve a higher-level goal of business agility or resiliency or equivalent. Actually, microservices are not even the only means; they're a means. ... It’s important to ask "do you have microservices, or do you have a monolith spread over hundreds of Git repos?" That, unfortunately, is what we often see. This is a distributed monolith, and it’s a terrible thing. It's hard to reason about. It's more prone to errors than its monolithic equivalent. With a conventional monolith where it's all contained in a single development environment, you get benefits such as compile-time checking and IDE refactoring support.


Demystifying the UK’s National Cyber Strategy 2022

Cyber resilience and digital security overlap different "pillars" of the strategy but share the same goal of enhancing the security posture of the UK, which requires a whole-of-society outlook. The government's effort to take an active role in the development and adoption of technologies critical to cyberspace is laudable. To remain in sync with the pace of change, there needs to be collaborative and active engagement with experts who have a deep understanding of the threats in cyberspace and how to secure the technologies required. The National Cyber Strategy outlines the government's vision to build on its influence and take on a leading role in promoting technologies and security best practices critical to cyberspace globally. It must not wait until the telecommunications industry encounters problems with 5G deployments and organisations are left trying to retrospectively fix their security weaknesses. Organisations must build their networks securely from the start, and effective guidance will be key to supporting this development.


Why Ransomware Groups Such as BlackCat Are Turning to Rust

BlackCat's migration to Rust, which can run on embedded devices and integrate with other languages, comes as no surprise to Carolyn Crandall, chief security advocate at network security specialist Attivo Networks. She tells ISMG that attackers are always going to innovate with new code that is designed to circumvent endpoint defense systems. Crandall says BlackCat ransomware is "extremely sophisticated" because it is human-operated and command line-driven. ... Anandeshwar Unnikrishnan, senior threat researcher at cybersecurity firm CloudSEK, tells ISMG that threat actors, especially malware developers, will eventually move away from traditional programing languages they formerly used to write malware, such as C or C++, and adopt newer languages, such as Rust, Go and Nim. Unnikrishnan says there are plenty of reasons for malware developers to migrate to languages such as Rust, Go and Nim. But the main reasons are because these newer languages are fast and can evade static analysis of most malware detection systems.


How healthy boundaries build trust in the workplace

Boundaries are the mental, emotional, and physical limits people maintain with respect to others and their environment, and psychologists consider them healthy if they ensure an individual’s continued well-being and stability. They serve many valuable functions. They help protect us, clarify our own responsibilities and those of others, and preserve our physical and emotional energy. They help us stay focused on ourselves, honor our values and standards, and identify our personal limits. Physical workplace boundaries may include delineating an individual’s personal space in a shared office or limiting body contact to handshakes rather than hugs. Mental boundaries reflect individuals’ important beliefs, values, and opinions. At work, that may mean not participating in activities that conflict with a person’s religious convictions, like betting pools, or personal choices, such as not drinking alcohol at office events. Emotional boundaries relate to people’s feelings being acknowledged and respected and may manifest as individuals not discussing their personal lives with coworkers.


Edge computing: 3 ways you can use it now

Edge infrastructure is what enables a “smart” factory floor, for example, armed with sensors and other connected devices that generate endless streams of data. “The manufacturing and warehousing sectors have been early adopters, with use cases like preventive maintenance and augmented reality/virtual reality (AR/VR) remote assistance applications powered by on-prem edge compute,” Mishra says. “Warehouse automation through robotics, location-based solutions, and supply chain optimization are also viewed as key use cases for edge.” A specific technology to watch for here is computer vision: the artificial intelligence (AI) discipline focused on computer-based recognition of images and/or video. “Manufacturing is doing really interesting work in the smart factory floor with quality control using computer vision to identify a slip in production quality before it becomes detectable to humans,” says Paul Legato, VP of platform engineering at Wallaroo. Experts expect that computer vision applications, powered by edge infrastructure, will be a hotbed of new use cases going forward.
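
As an illustrative sketch (OpenCV and NumPy, not any vendor's product) of that quality-control idea: compare each unit coming off the line against a known-good reference image and flag drift before a human would notice. File names, alignment assumptions, and thresholds are all hypothetical.

```python
# Flag production units whose camera capture deviates from a golden reference.
import cv2
import numpy as np

# Assumes captures are aligned and the same size as the reference image.
reference = cv2.imread("golden_unit.png", cv2.IMREAD_GRAYSCALE)  # known-good part
sample = cv2.imread("line_capture.png", cv2.IMREAD_GRAYSCALE)    # live capture

diff = cv2.absdiff(reference, sample)
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # ignore sensor noise

defect_ratio = np.count_nonzero(mask) / mask.size
if defect_ratio > 0.01:  # tolerance tuned per line; 1% here is arbitrary
    print(f"Flag for inspection: {defect_ratio:.1%} of pixels deviate")
```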


Five lessons for building your B2B e-commerce audience

You need to grow and tend to relationships with your target audience, but those relationships will only be as good as the technology you deploy. Your technology is your connection. I’ve seen too many organisations succumb to the fear that digital platforms will take all the flavor out of their brand. But if you choose the right solution, you’re going to have more interaction, more connection, and more opportunities to convey your brand. E-commerce soars when it’s part of a high-quality omnichannel solution designed with B2B complexities in mind. Still not sure if tech is the answer? Private equity firms — key players in the B2B ecosystem —tend to keep their finger on the pulse of future-friendly concepts. You can sense which way the wind is blowing by the new talent they bring in. ... It might seem counterintuitive, but digital drives more human connection. One of today’s most compelling paradoxes is that while markets are more complex, and the buyer’s journey has a thousand detours — I’ll get to that point in a moment — there’s a clear imperative in that complexity and journey. 


Evolving a data integration strategy

In addition to a lack of sufficient data governance, poorly integrated data leads to poor customer service. “In the digital economy, the customer expects you to know and have ready insight into every transaction and interaction they have had with the organisation,” says Tibco CIO Rani Johnson. “If a portion of a customer’s experience is locked in a silo, then the customer suffers a poor experience and is likely to churn to another provider.” Breaking down such silos of data requires business change. “Building end-to-end data management requires organisational changes,” says Nicolas Forgues, former chief technology officer (CTO) at Carrefour, who is now CTO at consulting firm Veltys. “You need to train both internal and external staff to fulfil the data mission for the company.” Businesses risk missing the bigger picture, in terms of spotting trends or identifying indicators of changes, if they lack a business-wide approach to data management and a strategy for integrating silos. In Johnson’s experience, one of the reasons for poor visibility of data is that business functions and enterprise applications are often decentralised. 



Quote for the day:

"Problem-solving leaders have one thing in common: a faith that there's always a better way." -- Gerald M. Weinberg

Daily Tech Digest - February 14, 2022

The DevOps Journey: Continuous Mindset Starts With Cultural Change

Continuous mindset is intertwined throughout all three of the above approaches. To reach the maximum value in each area, a continuous mindset needs to be enabled, matured and applied to disciplines in other DevOps areas. The changes are across people, processes, technology and culture. Like DevOps, gaining a continuous mindset is a journey that will take time, many steps and constant learning. Continuous mindset changes the way people achieve outcomes for your business. This requires an emphasis on changing the way people think at every level of the organization, the way people are connected and the way they work. Achieving true continuous mindset ROI requires significant cultural adoption and collaboration. A key component is leveraging strong organizational change management (OCM). ... Ultimately, a continuous mindset changes the organizational engagement model and requires organizations to align on how and when they will work together. This requires shifting responsibilities and empowering teams/individuals to contribute to the improvement of DevOps capabilities. 


What does a real economist think of cryptocurrencies?

It’s a computing question, but obviously intersects with economic issues and the core question is how will all this stuff pay for itself, which is still unclear. Economists do or at least could have a lot to say about that. I wouldn’t say the economists are worthless. I would say they haven’t been valuable yet. They may come to the party very late. Cryptocurrency is not fundamentally new monies, but you will find people in the sector who still will argue these things will serve as literal currencies. I think the name cryptocurrency has become unfortunate but clearly they are currencies can be used for some purposes sometimes black or gray market purposes, but I don’t think fundamentally that’s what they are. You see this with NFTs which are their own thing and they’re closely related to crypto. But like, how does that relate to a currency? Is an NFT an artwork? But within the language of cryptography it all makes more sense. It’s more of a unified development and that’s a way to think about it rather than thinking of them as currencies.


Hybrid working in the metaverse

Employees would be represented by avatars that used non-fungible tokens (NFTs) and cryptocurrencies to buy goods and services, and accessed applications, such as Slack or Dropbox, within this virtual space in order to communicate and collaborate. Operating as a platform-as-a-service offering, this virtual office would be based on technology ranging from augmented (AR) and virtual reality (VR) to digital twins and would integrate with third-party tools and applications to create a truly immersive environment. But as to whether such a concept is likely to take off any time soon, Iain Fisher, director of the Northern European digital strategy and solutions practice at research and advisory firm ISG, is not convinced – although he does believe it could have a role to play in certain, predominantly creative, industries. For instance, he sees computer gaming being a “huge” market, while the immersive nature of the technology means it could appeal to retailers, entertainment providers and advertisers keen to offer new customer experiences.


The Long Road to Quantum: Are We There Yet?

In the past few years, we have witnessed that access to quantum computing hardware has catalyzed an entire ecosystem involving algorithms, middleware, firmware, and control systems. Indeed, there's an entire supply chain of quantum computing-relevant hardware and services. It includes Zapata, QCWare, Riverlane, Q-CTRL, and many others. The same will happen with the larger quantum ecosystem when access to other categories of quantum hardware systems is made available over the cloud: quantum simulators, quantum emulators, analog quantum machines, and programmable but targeted-purpose quantum systems. Consider quantum sensing, quantum signal processing, quantum analog machine learning, communications. The list goes on. The barrier to entry to any one of those applications is enormous. Speaking specifically to cold-atom-based quantum technology, which is what I know, it takes something like three or four Ph.D. physicists, two or more years, and $2M or so to establish a credible effort that involves hardware. Now, suppose the barrier to creating the hardware is removed; the hardware expertise, the development time, and the capital costs of hardware go away.


How the metaverse could shape cybersecurity in 2022

Ever since the idea of the metaverse hit the news, a flurry of cybercriminal activity has been evident through rising NFT scams. Since these scams deploy social engineering tactics, it's safe to say that social engineering attacks are not going away any time soon. In fact, there will likely be a rise in attacks as the metaverse continues to take shape. Because the metaverse is set to house an extensive collection of sensitive data, a rise in hacking attacks is probable, along with a clear impact on data privacy. If things remain vulnerable, there could be frequent hacks and data theft, harming all users. With that comes the imminent threat of scams and malware invasions. However, what is probably most deeply concerning is that the metaverse is built on blockchain technology. While this technology is secure, it is not immune to vulnerabilities altogether. Moreover, it is decentralized, with no designated admin or moderator in charge or in control. With such an absence of authority, there will be no possible way to retrieve stolen or illegally obtained assets. Since the metaverse will operate through avatars, there will be no concrete method of identifying cybercriminals.


Why microservices need event-driven architecture

Moving data across wide area networks, which are often unstable and unpredictable, can be tricky and time-consuming. Add to that the challenges created by the Internet of Things, big data, and mobile devices, and the result is significant risk to microservices initiatives. Older systems are not quick and easy to update, but on the flip side microservices need to be swift and flexible. Older implementations rely on aging communication protocols, while microservices rely on APIs and open protocols. Most legacy systems will be deployed on premises, while most microservices live in the cloud. Newer systems such as IoT networks use highly specialized protocols, but most microservices APIs and frameworks do not support them as standard. Event-driven architecture addresses these mismatches between legacy systems and microservices. ... EDA takes data from being static to fluid: from being stuck at rest in a database, locked underneath an API, to being fully in motion - consumable as business-critical events happen in real time. RESTful microservices alone are not enough.
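
To make the decoupling concrete, here is a minimal in-process Python sketch of the publish/subscribe pattern at the heart of EDA. A real deployment would put a broker (Kafka, Solace, RabbitMQ) between producer and consumers and deliver events asynchronously; the shape of the contract is the same.

```python
# Producers publish events without knowing who consumes them, so a legacy
# adapter and a new microservice can both react to the same business event.
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)  # a real broker would deliver this asynchronously

bus = EventBus()
bus.subscribe("order.placed", lambda e: print("billing service saw", e["order_id"]))
bus.subscribe("order.placed", lambda e: print("legacy ERP adapter saw", e["order_id"]))
bus.publish("order.placed", {"order_id": 42})
```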


Performance Vs. Scalability

Sometimes architects divide systems into smaller parts. They may separate a data store into two physically different parts. This means that one part of the system can scale up or down separately from the other parts of the system. This can be useful when one part of the system receives more traffic than other parts. For example, the menu part of the system may receive thousands of requests per second, while the ordering part may only receive a few transactions per second. If the menu part of the system maxes out resources, it may slow down ordering even though ordering is not doing much work. Independent scaling would allow you to throw resources at the menu so that ordering doesn’t suffer performance degradation. On its face, this seems like a nice-to-have feature of the architecture. However, this may massively increase the system’s complexity, and it may actually increase costs. Database instances are often expensive, and you often pay a flat monthly fee whether you use them heavily or not. Moreover, the busier part of the system should only affect the performance of the less active part if the active part maxes out resources.


Inside the blockchain developers’ mind: Can EOS deliver a killer social DApp?

On Ethereum, users just have addresses, similar to Bitcoin addresses: long strings of numbers and letters that are free to create because they don't take up any network storage. This is critical because anything that takes up network storage or uses some of the network's computational resources has a real-world cost that must be paid by someone. Steem wanted to be a social blockchain, and so, the theory went, its users needed a centralized account that would be easy to remember and that they would use to manage their frequent interactions. So, it made perfect sense for these accounts to have human-readable names that were easy to remember, but that also meant that they took up network storage. But this centralized account also makes you a target. If you have a single private key that you regularly use to access an account, and that account holds valuable tokens, then hackers are going to do their best to gain access to your computer so that they can steal your money and anything else of value you might have on there.


No more transistors: The end of Moore’s law

The problem with Moore’s Law in 2022 is that the size of a transistor is now so small that there just isn’t much more we can do to make them smaller. The transistor gate, the part of the transistor through which electrons flow as electric current, is now approaching a width of just 2 nanometers, according to the Taiwan Semiconductor Manufacturing Company’s production roadmap for 2024. A silicon atom is 0.2 nanometers wide, which puts the gate length of 2 nanometers at roughly 10 silicon atoms across. At these scales, controlling the flow of electrons becomes increasingly more difficult as all kinds of quantum effects play themselves out within the transistor itself. With larger transistors, a deformation of the crystal on the scale of atoms doesn’t affect the overall flow of current, but when you only have about 10 atoms distance to work with, any changes in the underlying atomic structure are going to affect this current through the transistor. Ultimately, the transistor is approaching the point where it is simply as small as we can ever make it and have it still function. The way we’ve been building and improving silicon chips is coming to its final iteration.


Embracing Agile Values as a Tech and People Lead

A key agile principle to me is “Embrace Change”, the subtitle of Extreme Programming Explained by Kent Beck. Change is continuous in our world, and also at work. Accepting this fact makes it easier to let go of a decision that was once taken under different circumstances and find a new solution. Changing something is also easier if there is already momentum from another change, so I like to understand where the momentum is and then facilitate its flow. We had a large organizational change at the beginning of 2020. Some teams were newly created, and everyone at MOIA was allowed to self-select into one of around 15 teams. That was very exciting. Some team formations went really well; others didn’t. Two frontend developers had self-selected into a team that had less frontend work to do than expected. They tried to make it work by taking on more responsibility in other fields, thus supporting their team, but after a year they were frustrated and felt stuck. Recognizing the moment when they needed outside support to change their team assignment was very important.



Quote for the day:

"Successful leadership requires positive self-regard fused with optimism about a desired outcome." -- Warren Bennis

Daily Tech Digest - February 13, 2022

Software is eating the world–and supercharging global inequality

There’s no denying that the world is rapidly changing, with innovations such as artificial intelligence, robotics, blockchain, and the cloud. Each of the previous three industrial revolutions, including the most recent digital revolution, led to economic growth and helped eliminate mass poverty in many countries. However, these moments also concentrated wealth in the hands of those who control new technologies. VCs will play an increasing role in determining which technologies factor into our daily lives over the next ten years, and we must ensure that technology is used to modernize antiquated industries and create a better standard of living worldwide. Ensuring that this new wave of technology benefits as many people as possible is the challenge of our generation, especially considering that the pending climate crisis will disproportionately impact lower-income and marginalized communities. Bitcoin farms do not benefit maize farmers in Lagos who face deadly floods, but VCs’ obsession with crypto generates outsized investment, and the wealthy get wealthier.


How open source is unlocking climate-related data's value

One of the barriers to assessing the cost of risks and opportunities in climate-change research is the lack of reliable, readily accessible climate data. This data gap prevents financial sector stakeholders and others from assessing the financial stability of mitigation and resilience efforts and channeling global capital flows towards them. It also forces businesses to engage in costly, improvised ingestion and curation efforts without the benefit of shared data or open protocols. To address these problems, the Open Source Climate (OS-Climate) initiative is building an open data science platform that supports complex data ingestion, processing, and quality management requirements. It takes advantage of the latest advances in open source data platform tools and machine learning, and of the scenario-based predictive analytics developed by OS-Climate community members. To build a data platform that is open, auditable, and supports durable and repeatable deployments, the OS-Climate initiative leverages the Operate First program.
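
As a flavor of the ingestion and quality-management problem such a platform tackles, here is a minimal, hypothetical Python sketch (the schema and rules are illustrative, not OS-Climate’s actual pipeline) that accepts well-formed rows and quarantines bad ones with an audit trail:

```python
# Toy ingestion with a quality gate: reject missing or impossible values,
# but keep the rejects so the process stays auditable.
import csv, io

RAW = """site,year,emissions_tonnes
plant_a,2021,1520.5
plant_b,2021,
plant_c,2021,-12.0
"""

def ingest(raw: str):
    accepted, rejected = [], []
    for row in csv.DictReader(io.StringIO(raw)):
        try:
            value = float(row["emissions_tonnes"])  # "" fails here
            if value < 0:
                raise ValueError("negative emissions")
            accepted.append({**row, "emissions_tonnes": value})
        except ValueError as err:
            rejected.append((row, str(err)))  # audit trail, not silent drop
    return accepted, rejected

accepted, rejected = ingest(RAW)
print(len(accepted), "rows accepted;", len(rejected), "held for review")
```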


Multicloud Strategy: How to Get Started?

Managing security in the cloud is a daunting task, especially in a multicloud. For this reason, the recommended approach is to have a Cloud Security Control Framework in place early in any cloud migration strategy. But what does this mean? Today the best practice is to shift the security mentality from perimeter security to a more holistic approach that considers cybersecurity risks from the very design of the multicloud deployment. It starts by allowing the DevSecOps team to build and automate modular guardrails around the infrastructure and application code right from the beginning. You can think of these guardrails as cross-cloud security controls based on the current trend of implementing a Zero Trust networking architecture. Under this new paradigm, all users and services are “mistrusted”, even within the security perimeter. This approach requires a rethinking of access controls, since workloads may be distributed and deployed across different cloud providers. Implementing security controls at all levels is key, from infrastructure to application code, services, networks, data, users’ access, etc.
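
As a minimal sketch of what a guardrail-as-code check might look like under the Zero Trust paradigm described here (all names and fields are hypothetical), consider a policy that mistrusts every request by default and grants access only when every control passes, regardless of which cloud hosts the workload:

```python
# Deny-by-default access check, portable across providers.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    service: str
    cloud: str            # e.g. "aws", "azure", "gcp"
    mfa_verified: bool
    device_compliant: bool

def zero_trust_check(req: AccessRequest) -> bool:
    # Grant only when every control passes; being "inside" a network
    # perimeter earns no trust at all.
    return all([req.mfa_verified, req.device_compliant])

req = AccessRequest("alice", "billing-api", "azure",
                    mfa_verified=True, device_compliant=False)
print("allow" if zero_trust_check(req) else "deny")  # -> deny
```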


The future of enterprise: digging into data governance

Enterprise technology is always moving forward, and so, as more businesses move to a cloud-focused strategy, the boundaries of what that means are evolving. New models such as serverless and multi-cloud are redefining the ways in which companies will need to manage the flow and ownership of their data, and they’ll require new ways of thinking about how data is governed. According to Syed, these new models will make the ability to decentralize data architecture while maintaining centralized governance policies even more important. “A lot of companies are going to invest in trying to figure out, ‘How do I build something that combines not just my one data source, but my data warehouse, my data lake, my low-latency data store and pretty much any data object I have?’ How do you bring it all together under one umbrella? The tooling has to be very configurable and flexible to meet all the different lines of businesses’ unique requirements, but also ensure all the central policies are being enforced while you are producing and consuming the data.”
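
A minimal, hypothetical Python sketch of the pattern Syed describes, one central policy layer enforced across otherwise independent stores (the sources and the masking rule are illustrative):

```python
# One central policy, many decentralized sources: reads go through a
# governance wrapper so the masking rule holds wherever the data lives.
CENTRAL_POLICY = {"masked_columns": {"ssn", "email"}}

warehouse = [{"id": 1, "email": "a@x.com", "spend": 120}]
data_lake = [{"id": 2, "ssn": "123-45-6789", "region": "EU"}]

def governed_read(rows, policy=CENTRAL_POLICY):
    masked = policy["masked_columns"]
    return [{k: ("***" if k in masked else v) for k, v in row.items()}
            for row in rows]

for source in (warehouse, data_lake):   # same policy, different stores
    print(governed_read(source))
```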


Cloud security training is pivotal as demand for cloud services explodes

Organizations can be caught out by thinking that they can lift-and-shift their existing applications, services and data to the cloud, where they will be secure by default. The reality is that migrating workloads to the cloud requires significant planning and due diligence, plus the addition of cloud management expertise to the workforce. Workloads in the cloud rely on a shared responsibility model: the cloud provider assumes responsibility for the fabric of the cloud, and the customer assumes responsibility for the servers, services, applications and data within (under an IaaS model). However, these boundaries can seem somewhat fuzzy, especially as there isn’t a uniform shared responsibility model across cloud providers, which can result in misunderstandings for companies that use multi-cloud environments. With so much invested in cloud infrastructure, and with a general lack of awareness of cloud security issues and responsibilities, as well as a lack of skills to manage and secure these environments, there is much to be done.
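
One way to make the boundary concrete is to write the split down explicitly. The encoding below is illustrative rather than any provider’s official matrix, which is precisely the problem for multi-cloud users:

```python
# A rough IaaS responsibility split; each provider publishes its own
# variant, so this mapping must be re-checked per cloud in use.
IAAS_RESPONSIBILITY = {
    "physical datacenter": "provider",
    "hypervisor / fabric": "provider",
    "guest OS & servers":  "customer",
    "applications":        "customer",
    "data":                "customer",
}
for layer, owner in IAAS_RESPONSIBILITY.items():
    print(f"{layer:20s} -> {owner}")
```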


How the 12 principles in the Agile Manifesto work in real life

Leaders who work with agile teams focus on ensuring that the teams have the support (tools, access, resources) and environment (culture, people, external processes) they need, and then trust them to get the job done. This principle can scare leaders who have a more command-and-control management style: they wonder how they’ll know whether their team is succeeding and focusing on the right things. My response to these concerns is to focus on the team’s outcomes. Are they delivering working product frequently? Are they making progress towards their goals? Those are the metrics that warrant attention. It is a necessary shift in perspective and mindset, and one that leaders as well as agile teams need to make to achieve the best results. To learn more about how to support agile teams, leaders should consider attending the Professional Agile Leadership - Essentials class. Successful agile leaders enable teams to deliver value by providing them with the tools they need to be successful, offering guidance when needed, embracing servant leadership and focusing on outcomes.


How to Pick the Right Automation Project

With the beginning and end states clearly articulated, you can then specify a step-by-step journey, with projects sequenced according to which ones can do the most in early days to lay essential foundations for later initiatives. Here’s an example to illustrate how this approach can lead to better choices. At a construction equipment manufacturer, there are three tempting areas to automate. One is the solution a vendor is offering: a chatbot tool that can be fairly simply implemented in the internal IT help desk with immediate impact on wait times and headcount. A second possibility is in finance, where sales forecasting could be enhanced by predictive modeling boosted by AI pattern recognition. The third idea is a big one: if the company could use intelligent automation to create a “connected equipment” environment on customer job sites, its business model could shift to new revenue streams from digital services such as monitoring and controlling machinery remotely. If you’re going for a relatively easy implementation and fast ROI, the first option is a no-brainer. If instead you’re looking for big publicity for your organization’s bold new vision, the third one’s the ticket.


Hybrid work and the Great Resignation lead to cybersecurity concerns

With the fallout of the Great Resignation still being felt by many enterprises, Code42’s report raises four main concerns. As 4.5 million employees left their jobs in November 2021 alone, protecting data has become the first big challenge for industry leaders. Many employees leaving their roles have accidentally or intentionally taken data with them to competitors within the same industry, or have even leveraged their former employers’ data for ransom. According to the report, 49% of business leaders are concerned with the types of data that are leaving, and 52% said they are concerned with what information is being saved on local machines and personal hard drives. Additionally, business leaders are more concerned with the content of the data that is exposed than with how it is exposed. Another major concern is a disconnect around the problem of employees leaving in droves, which creates uncertainty about the ownership of data. Cybersecurity practitioners want more say in setting their company’s security policies and priorities, since they are the ones dealing with the risks their employers face.


Interoperability must be a priority for building Web3 in 2022

In addition to being time-consuming to build, one-off bridges are often highly centralized, acting as intermediaries between protocols. Built, owned, and operated by a single entity, these bridges become bottlenecks between different ecosystems: the controlling entity decides which tokens to support and which new networks to connect. ... Another impact of the siloed nature of the blockchain space is that developers are forced to choose between blockchain protocols and end up building dapps that can be used on only one network. This cuts the potential user base of any solution down significantly and prevents dapps from reaching mass adoption. Developers then have to spend resources deploying their apps across multiple networks, which for many means fragmenting their liquidity across network-specific applications. From these struggles and drains on time and money, we know that a more universal solution for interoperability is the only way forward. Our industry, perhaps the most innovative in the world today and packed with the most talented minds, must prioritize the principles of universality, decentralization, security, and accessibility when it comes to interoperability.


Better Data Modeling with Lean Methodology

Lean is a methodology for organizational management based on Toyota’s 1930s manufacturing model, adapted for knowledge work. Where Agile was developed specifically for software development, Lean was developed for organizations as a whole, and focuses on continuous small improvements, combined with a sound management process, in order to minimize waste and maximize value. Quality standards are maintained through collaborative work and repeatable processes. ... Eliminate anything that does not add value, as well as anything blocking the ability to deliver results quickly. At the same time, empower everyone in the process to take responsibility for quality. Automate processes wherever possible, especially those prone to human error, and get constant test-driven feedback throughout development. Improvement is only possible through learning, which requires proper documentation of the iterative process so knowledge is not lost. All aspects of communication, the way conflicts are handled, and the onboarding of team members should always occur within a culture of respect.



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." -- Andrew Jackson