Daily Tech Digest - February 06, 2022

Technical Debt and Modular Software Architecture

An organization incurs technical debt whenever it cedes its rights and perquisites as a customer to a cloud service provider. To get a feel for how this works in practice, consider the case of a hypothetical SaaS cloud subscriber. The subscriber incurs technical debt when it customizes the software or redesigns its core IT and business processes to take advantage of features or functions that are specific to the cloud provider’s platform (for example, Salesforce’s, Marketo’s, Oracle’s, etc.). This is fine for people who work in sales and marketing, as well as for analysts who focus on sales and marketing. But what about everybody else? Can the organization make its SaaS data available to high-level decision-makers and to the other interested consumers dispersed across its core business function areas? Can it contextualize this SaaS data with data generated across its core function areas? Is the organization taking steps to preserve historical SaaS data? In short: What is the opportunity cost of the SaaS model and its convenience? What steps must the organization take to offset this opportunity cost?


Data Breaches Affected Nearly 6 Billion Accounts in 2021

Breaches grew rapidly in 2021, noted Lucas Budman, founder and CEO of TruU, a multifactor authentication company in Palo Alto, Calif. “We exceeded the number of breach events in 2020 by the third quarter of 2021,” he told TechNewsWorld. A number of factors have been contributing to that increase, he added. “The ever-increasing sophistication of threat actors, a greater number of connected IoT devices, and the protracted shortage of skilled security talent all play a role in increased breach activity,” he said. Budman also maintained that Covid-19 has contributed to growing data breach numbers. “Data shows that the surge in remote and hybrid work and other factors resulting from the Covid-19 pandemic have fueled the rise of cybercrime by 600 percent or more,” he said. ... “Since an exceedingly large percentage of attacks focus on the end-user, this move to remote has proven very fruitful for attackers,” he told TechNewsWorld. “Similarly,” he continued, “the pandemic has dramatically changed the way goods and services are manufactured, dispatched and consumed. ...”


Council Post: How to develop a comprehensive AI governance & ethics function

Biases stemming from cognitive bias, incomplete data, flaws in the algorithm, and the like slow down the growth of AI in an organisation. Research and development play an important role in addressing these issues. Who understands this better than ethicists, social scientists, and domain experts? Therefore, businesses should include such experts in their AI projects across applications. Data architects also play a key role in governing AI products. Companies should have a complete pipeline of data or metadata for AI modelling. Remember, AI’s success depends on a well-sorted data architecture that is free of errors and noise. To achieve this, data standardisation, data governance, and business analytics are a must. HR plays a key role in shaping the AI governance function. For instance, it should find candidates who “fit” into the organisation’s existing AI framework and create training material for the existing workforce to help them understand how to create ethical AI applications. Ensuring AI products don’t cross any legal boundaries is critical for smooth deployment. AI solutions must meet the stipulated compliance guidelines of the organisation and the industry in which the organisation operates.


Importance of Binding Business and System Architecture

An enterprise architecture is a carefully designed structure for a business or company's entrepreneurial and economic activity. One can easily assert that these entrepreneurial or economic activities include people, processes, and systems working in harmony to yield important business outcomes. These structures include organizational design, operational processes producing value, and the systems used by people during the execution of their mission. Enterprise architects use the business-prescribed operational end-state (results of value) to guide (like a blueprint) the enterprise to accomplish its mission—frequently, the end-states include vision, goals, objectives, and capabilities. Can a business exchange goods and services without technology and survive? Of course not. ... The enterprise architecture is neither the business architecture (operational viewpoint) nor the system architecture (technical viewpoint)—rather, the enterprise architecture is both architectures created in an integrated form, using a standardized method of design, and usable and consumable by both operational and technology people.


Cloud Native Winners and Losers

Enterprises that don’t plan ahead to move an application off a specific cloud but are forced to do so at some future point will also become losers. There is a lot of cost and risk involved in modifying applications to remove specific cloud native services and replace them with other cloud native services or open services. Clearly, this is the dreaded “vendor lock-in.” Most applications that move to cloud platforms won’t ever move off that platform during the life of the application, mostly due to the costs and risks involved. Another drawback is that you’ll need cloud-specific skills to take full advantage of cloud native features. This talent may not be available in-house or in the general labor pool, and/or it could drive staffing costs over budget. The pandemic drove a massive rush to public cloud providers, which meant the demand for cloud migration skills exploded as well, driving up salaries and consulting fees. Moreover, the scarcity of qualified skills increases the risk that you won’t find the skills needed for cloud native systems builds, and/or the required level of talent will be unavailable to create optimized and efficient systems.


Leveraging small data for insights in a privacy-concerned world

While big data focuses on the huge volumes of information that individuals and consumers produce for businesses to look at and AI programs to sift through, small data is made up of far more accessible bite-sized chunks of information that humans can interpret to gain actionable insights. While big data can be a hindrance to small businesses due to its unstructured nature, the masses of storage space it requires, and oftentimes the necessity of being held in SQL servers, small data holds plenty of appeal in that it can arrive ready to sort with no need for merging tables. It can also be stored on a local PC or database for ease of access. However, as it is generally stored within a company, it’s essential that businesses utilize the appropriate levels of cybersecurity to protect the privacy of their customers and to keep their confidential data safe. Maxim Manturov, head of investment research at Freedom Finance Europe, has identified Palo Alto as a leading firm for businesses looking to protect their small data centrally. “Its security ecosystem includes the Prisma cloud security platform and the Cortex artificial intelligence (AI)-based threat detection platform,” Manturov notes.


EvilModel: Malware that Hides Undetected Inside Deep Learning Models

While this scenario is alarming enough, the team points out that attackers can also choose to publish an infected neural network on online public repositories like GitHub, where it can be downloaded on a larger scale. In addition, attackers can also deploy a more sophisticated form of delivery through what is known as a supply chain attack, or value-chain or third-party attack. This method involves having the malware-embedded models pose as automatic updates, which are then downloaded and installed onto target devices. ... The team notes, however, that it is possible to destroy the embedded malware by retraining and fine-tuning models after they are downloaded, as long as the infected neural network layers are not “frozen”: the parameters of frozen layers are not updated during the next round of fine-tuning, which would leave the embedded malware intact. “For professionals, the parameters of neurons can be changed through fine-tuning, pruning, model compression or other operations, thereby breaking the malware structure and preventing the malware from recovering normally,” said the team.
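
To make the mechanics concrete, here is a toy sketch of the general idea (an illustration, not the EvilModel technique itself): payload bytes are hidden in the low-order mantissa byte of float32 weights, which barely changes the model, while even a small fine-tuning-style update to the parameters corrupts the payload.

```python
import numpy as np

def embed(weights, payload):
    # Overwrite the lowest mantissa byte of each float32 weight with a payload byte;
    # the numerical change to each weight is tiny, so model accuracy barely moves.
    bits = weights.view(np.uint32).copy()
    for i, byte in enumerate(payload):
        bits[i] = (bits[i] & np.uint32(0xFFFFFF00)) | np.uint32(byte)
    return bits.view(np.float32)

def extract(weights, length):
    # Read the payload back out of the lowest mantissa bytes.
    return bytes(int(b) & 0xFF for b in weights.view(np.uint32)[:length])

rng = np.random.default_rng(0)
weights = rng.normal(size=256).astype(np.float32)
secret = b"payload"

infected = embed(weights, secret)
assert extract(infected, len(secret)) == secret   # survives storage and distribution

# A fine-tuning-style parameter update perturbs the mantissa bits...
tuned = infected + rng.normal(scale=1e-4, size=infected.shape).astype(np.float32)
print(extract(tuned, len(secret)))                # ...and the payload is destroyed
```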


Moore’s Not Enough: ​4 New Laws of Computing

Law 1. Yule’s Law of Complementarity - From a strategic point of view, technology firms ultimately need to know which complementary element of their product to sell at a low price—and which complement to sell at a higher price. And, as the economist Bharat Anand points out in his celebrated 2016 book The Content Trap, proprietary complements tend to be more profitable than nonproprietary ones. ... Law 3. Evans’s Law of Modularity - Evans’s Law could be formulated as follows: The inflexibilities, incompatibilities, and rigidities of complex and/or monolithically structured technologies could be simplified by the modularization of the technology structures (and processes). ... In other words, modularization of software projects and the development process makes such endeavors more efficient. As outlined in a helpful 2016 Harvard Business Review article, the preconditions for an agile methodology are as follows: The problem to be solved is complex; the solutions are initially unknown, with product requirements evolving; the work can be modularized; and close collaboration with end users is feasible.


Cisco announces Wi-Fi 6E, private 5G to assist with hybrid work

The new Cisco Wi-Fi 6E products are the first high-end 6E access points that will assist businesses and their workers with high-traffic hybrid setups. Wi-Fi 6E also opens up new spectrum in the form of the 6-GHz band. The access points will use the newly available spectrum to deliver wired-matching speeds and gigabit performance. Wi-Fi 6E will also greatly expand capacity and performance for the latest devices running collaborative applications designed for hybrid work, coupled with Cisco’s DNA Center and DNA Spaces. With these upgrades, the company is promoting collaboration in offices, campuses and branches by delivering Internet of Things and operational technology benefits in smart buildings at scale. Also announced were three new additions to the Catalyst 9000X line of switches, the 9300X, the 9400X and the 9500X/9600X, to help support the traffic brought in by increased wireless capacity. The 9300X will sport 48 Universal Power Over Ethernet (UPOE) ports for small to large campus access and aggregation deployments with Multigigabit Ethernet (mGig) speeds and 100G uplink modules in a stackable switching platform.


Supercomputers, AI and the metaverse: here’s what you need to know

Meta has promised a host of revolutionary uses of its supercomputer, from ultrafast gaming to instant and seamless translation of mind-bendingly large quantities of text, images and videos at once — think about a group of people simultaneously speaking different languages, and being able to communicate seamlessly. It could also be used to scan huge quantities of images or videos for harmful content, or identify one face within a huge crowd of people. The computer will also be key in developing next-generation AI models; it will power the metaverse and serve as a foundation upon which future metaverse technologies can rely. But the implications of all this power mean that there are serious ethical considerations for the use of Meta’s supercomputer, and for supercomputers more generally. ... The age of AI also brings with it key questions about human privacy and the privacy of our thoughts. To address these concerns, we must seriously examine our interaction with AI. When we look at the ethical structures of AI, we must ensure its usage is transparent, explainable, bias-free, and accountable.



Quote for the day:

"Strong leaders encourage you to do things for your own benefit, not just theirs." -- Tim Tebow

Daily Tech Digest - February 05, 2022

Quantum Computing: Researchers Achieve 100 Million Quantum Operations

Quantum computing systems are notoriously difficult to maintain in coherent states. The fragile nature of the "ordered chaos" is such that qubit information and qubit connection (entanglement) usually deteriorate on timescales much shorter than a second. The new research brings quantum computing coherency to human-perceivable scales of time. Using a technique they've termed "single-shot readout," the researchers used precise laser pulses to add single electrons to qubits. ... While it may not sound like much, time flows differently in computing; going from stable quantum states on the order of fractions of a second up to five seconds increases the amount of useful computing time extracted from the available qubits. Moreover, it opens up new ways of increasing processing power beyond pure qubit count - the researchers calculate that they can perform around 100 million quantum operations in that five-second slice. So perhaps quantum computing will be a threat to Bitcoin and to current government, commercial and personal encryption schemes much earlier than expected?
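
A quick back-of-the-envelope check on those figures (the uniform per-operation time is an inference from the article's numbers, not a measured value):

```python
# ~100 million operations spread across a five-second coherence window
coherence_window_s = 5.0
operations = 100_000_000
print(coherence_window_s / operations)  # 5e-08 s, i.e. roughly 50 ns per operation
```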


Meta to bring in mandatory distances between virtual reality avatars

Meta announced on Friday that it is introducing personal boundaries on two VR apps: Horizon Worlds, where people can meet fellow VR users and design their own world; and Horizon Venues, which hosts VR events such as comedy shows or music gigs. The company said the distance between people will be the VR equivalent of four feet. “A personal boundary prevents anyone from invading your avatar’s personal space. If someone tries to enter your personal boundary, the system will halt their forward movement as they reach the boundary,” said the company. Meta is introducing the 4ft boundary as a default setting and will consider further changes such as letting people set their own boundaries. “We think this will help to set behavioural norms – and that’s important for a relatively new medium like VR,” said Meta. The UK data watchdog has also said it is seeking clarification from Meta about parental controls on the company’s popular Oculus Quest 2 VR headset, as campaigners warned that it could breach an online children’s safety code. 


Platform Engineering Challenge: Security vs. Dev Experience

There are a few things that you can do to make things easier for developers. First, ensure that all developers affected by the new policies are aware of them; a lack of awareness is a common reason for mistakes. Once the developer team knows the security policies, they can work with those policies in mind. Next, remember that mistakes can happen. To mitigate this, automate as many checks as possible with a proper continuous integration (CI) pipeline. In the CSP example, it is possible to crawl your application automatically and report CSP violations with a little bit of initial setup. Automating the verification step can drastically reduce the possibility of mistakes. This doesn’t just apply to CSP; it is relevant for any check you want to implement to ensure that your devs follow particular guidelines or standards. Another potential inter-team headache is vulnerabilities in third-party packages. Usually, the dev teams will install new packages. Depending on your business structure, though, it might fall to the platform or security teams to fix any vulnerabilities found.
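
As a minimal sketch of that kind of automated verification (the page list, the header checks, and the failure policy are illustrative assumptions, not a complete CSP audit), a CI step could fetch each page and fail the build when the policy is missing or obviously weak:

```python
import sys
import requests

PAGES = ["https://app.example.com/", "https://app.example.com/login"]  # hypothetical URLs

def csp_problem(url):
    # Fetch the page and inspect its CSP response header.
    response = requests.get(url, timeout=10)
    policy = response.headers.get("Content-Security-Policy", "")
    if not policy:
        return f"{url}: no Content-Security-Policy header"
    if "script-src" in policy and "'unsafe-inline'" in policy:
        return f"{url}: policy allows 'unsafe-inline'"
    return None

failures = [problem for problem in map(csp_problem, PAGES) if problem]
if failures:
    print("\n".join(failures))
    sys.exit(1)  # a non-zero exit code fails the CI job
```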


Satya Nadella: Microsoft has “permission to build the next Internet”

[In] the Microsoft I grew up in, I always think about three things—and we added a fourth. The three things that we always had are: we built tools for people to write software; we built tools for people to drive their personal and organizational productivity; and we built games. That’s the three things that Microsoft has done from time immemorial. The first game, I think, was built before Windows was there; it existed on DOS. And so, to me, gaming, coding, productivity, or knowledge worker tools are at the core. The thing that we added pretty successfully—that most people thought we would never be able to do—is become an enterprise company... actually really build enterprise infrastructure... and business applications. And guess what? We now do that as well. ... Take what’s happening with the metaverse. What is the metaverse? Metaverse is essentially about creating games. It is about being able to put people, places, things [in] a physics engine and then having all the people, places, things in the physics engine relate to each other.


Council Post: Role of AI in creating inclusive products & solutions

Artificial Intelligence has the potential to create a more inclusive society. Let’s consider two dimensions: language and disability. Language is often the greatest barrier towards access to information, and hence, opportunities. Today language translation using AI is removing that barrier. For example, Microsoft’s Azure AI now empowers organizations to translate between 100 languages and dialects globally, making information in text and documents accessible to more than 5.6 billion people worldwide. These include not only the world’s most spoken languages like English, Chinese, Hindi, Arabic and Spanish, but also dialects that are native or preferred by a smaller population. There are close to 7,000 languages spoken around the world, but sadly, every two weeks a language dies with its last speaker. Recent advances in AI have enabled inclusion of low-resource, and often endangered, languages and dialects such as Tibetan, Assamese and Inuktitut. A multilingual AI model called Z-code combines several languages from a language family such as Hindi, Marathi, and Gujarati.


Mozilla is shutting down its VR web browser, Firefox Reality

A top VR web browser is closing down. Today, Mozilla announced it’s shutting down its Firefox Reality browser — the four-year-old browser built for use in virtual reality environments. The technology had allowed users to access the web from within their VR headset, doing things like visiting URLs, performing searches and browsing both the 2D and 3D internet using their VR hand controllers instead of a mouse. Firefox Reality first launched in fall 2018 and has been available on Viveport, Oculus, Pico and HoloLens platforms through their various app stores. While the browser was capable of surfing the 2D web, the expectation was that users would largely use it to browse and interact with the web’s 3D content, like 360-degree panoramic images and videos, 3D models and WebVR games. But in an announcement published today, Mozilla says the browser will be removed from the stores where it’s been available for download in the “coming weeks.” Mozilla is instead directing users who still want to utilize a web browser in VR to Igalia’s upcoming open source browser, Wolvic, which is based on Firefox Reality’s source code.


How SQL can unify access to APIs

Software construction requires developers to compose solutions using a growing proliferation of APIs. Often there’s a library to wrap each API in your programming language of choice, so you’re spared the effort of making raw REST calls and parsing the results. But each wrapper has its own way of representing results, so when composing a multi-API solution you have to normalize those representations. Since combining results happens in a language-specific way, your solution is tied to that language. And if that language is JavaScript or Python or Java or C#, then it is arguably not the most universal and powerful way to query (or update) a database; that distinction belongs to SQL. ... Steampipe is an open-source tool that fetches data from diverse APIs and uses it to populate tables in a database. The database is Postgres, which is, nowadays, a platform on which to build all kinds of database-like systems by creating extensions that deeply customize the core. One class of Postgres extension, the foreign data wrapper (FDW), creates tables from external data. Steampipe embeds an instance of Postgres that loads an API-oriented foreign data wrapper.
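
In practice that means plain SQL over a regular Postgres connection. Here is a hedged sketch of querying Steampipe from Python (the service port, credentials, and the GitHub-plugin table name reflect common defaults but are assumptions; check them against your own installation):

```python
import psycopg2  # any Postgres client works; Steampipe speaks the Postgres wire protocol

# Assumes `steampipe service start` is running locally with the GitHub plugin
# installed; the password is printed by `steampipe service status`.
conn = psycopg2.connect(
    host="localhost",
    port=9193,             # Steampipe's default service port
    dbname="steampipe",
    user="steampipe",
    password="<password from `steampipe service status`>",
)
with conn, conn.cursor() as cur:
    # The API call is hidden behind an ordinary table.
    cur.execute("select full_name from github_my_repository limit 5")
    for (full_name,) in cur.fetchall():
        print(full_name)
```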


The wrong data privacy strategy could cost you billions

Regulators have long understood that de-identification is not a silver bullet due to re-identification with side information. When regulators defined anonymous or de-identified information, they refrained from giving a precise definition and deliberately opted for a practical one based on the reasonable risks of someone being re-identified. GDPR mentions “all the means reasonably likely to be used” whereas CCPA defines de-identified to be “information that cannot reasonably identify” an individual. The ambiguity of both definitions places the burden of privacy risk assessment onto the compliance team. For each supposedly de-identified dataset, they need to prove that the re-identification risk is not reasonable. To meet those standards and keep up with proliferating data sharing, organizations have had to beef up their compliance teams. This appears to have been the process that Netflix followed when it launched a million-dollar prize to improve its movie recommendation engine in 2006. It publicly released a stripped-down version of its dataset with 500,000 movie reviews, enabling anyone in the world to develop and test prediction engines that could beat its own.
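
The re-identification risk at stake is easy to demonstrate. In this toy linkage attack (all data is fabricated for the illustration), joining a "de-identified" release with public side information on shared quasi-identifiers re-attaches a name to an "anonymous" record:

```python
import pandas as pd

# "De-identified" release: direct identifiers stripped, quasi-identifiers kept.
released = pd.DataFrame({
    "zip3": ["021", "940"],
    "birth_year": [1980, 1975],
    "movie_rated": ["A", "B"],
})

# Public side information, e.g. a profile whose owner is known.
side_info = pd.DataFrame({
    "name": ["Alice"],
    "zip3": ["021"],
    "birth_year": [1980],
})

# The join reveals which "anonymous" row belongs to Alice.
print(released.merge(side_info, on=["zip3", "birth_year"]))
```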


What the Rise in Cyber-Recon Means for Your Security Strategy

Enterprises need to be aware that an increase in new cybercriminals armed with advanced technologies will increase the likelihood and volume of attacks. Standard tools must be able to scale to address potential increases in attack volumes. These tools also need to be enhanced with artificial intelligence (AI) to detect attack patterns and stop threats in real time. Critical tools should include anti-malware engines using AI detection signatures, endpoint detection and response (EDR), advanced intrusion prevention system (IPS) detection, sandbox solutions augmented with MITRE ATT&CK mappings and next-gen firewalls (NGFWs). In the best-case scenario, these tools are deployed consistently across the distributed network (data center, campus, branch, multi-cloud, home office, endpoint) using an integrated security platform that can detect, share, correlate and respond to threats as a unified solution. Cybercriminals are opportunistic, and they’re also growing increasingly crafty. We’re now seeing them spend more time on the reconnaissance side of cyberattacks. 


Want Real Cybersecurity Progress? Redefine the Security Team

The strategies described above share one trait in common: They all leave security mostly in the hands of an elite security team. No matter how many security tools a business buys, how far left it shifts security, or how many compliance rules it enforces, security operations still remain the realm primarily of security engineers and analysts (perhaps with just a bit of help from developers and IT Ops teams at businesses that take DevSecOps seriously). That fact is part of what makes the concept of collective security so innovative. It fundamentally breaks a mold that has been in place for decades: the mold that forces a single team to “own” security across the entire business, leaving little opportunity for stakeholders who are not security experts to contribute to security initiatives. By shifting to a strategy in which security is everyone’s responsibility — and, just as important, where everyone has the ability to define security rules and validate resources without having to know how to code or use sophisticated security tools — businesses make it possible for everyone to understand the state of cybersecurity in their organization, as well as to help enforce cybersecurity standards.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - February 04, 2022

5 steps to run a successful cybersecurity champions program

A solid program helps to demystify cybersecurity for non-security employees and bring it into the scope of their own specialties. “People often think cybersecurity is an unfathomable, deeply technical concept, but having security champions creates an open dialogue, and couching security issues and threats in ways that are relevant to specific roles is the best way to navigate this issue,” Dr. John Blythe, chartered psychologist and director of cyber workforce psychology at cybersecurity training firm Immersive Labs, tells CSO. “Engineers need to understand how to write secure code, executive teams need to know how to respond better in a cyber crisis, and IT teams need to know how to secure cloud infrastructure.” Once trained, cybersecurity champions can identify and address cybersecurity vulnerabilities before they become widespread and problematic, adds Dominic Grunden, CISO of UnionDigital Bank. “In doing so, they can save organizations significant time and money in the process,” he says. Grunden has integrated and run cybersecurity champions programs across multiple industries and organizations.


Charming Kitten Sharpens Its Claws with PowerShell Backdoor

Charming Kitten is now using what researchers have dubbed PowerLess Backdoor, a previously undocumented PowerShell trojan that supports downloading additional payloads, such as a keylogger and an info stealer. The team also discovered a unique new PowerShell execution process related to the backdoor aimed at slipping past security-detection products, Frank wrote. “The PowerShell code runs in the context of a .NET application, thus not launching ‘powershell.exe’ which enables it to evade security products,” he wrote. Overall, the new tools show Charming Kitten developing more “modular, multi-staged malware” with payload-delivery aimed at “both stealth and efficacy,” Frank noted. The group also is leaning heavily on open-source tools such as cryptography libraries, weaponizing them for payloads and communication encryption, he said. This reliance on open-source tools demonstrates that the APT’s developers likely lack “specialization in any specific coding language” and possess “intermediate coding skills,” Frank observed.


Toward a 3-Stage Software Development Lifecycle

No one wants someone else to find the fault in their code, but at the same time, developers aren’t crazy about running tests, whether that means writing tests or doing manual tests. But developers shouldn’t have to ship their code off to a testing environment to discover whether it works, or worry about the embarrassment of having a colleague find an error. Robust change validation addresses this: it allows developers to spot potential errors before the application or update leaves the local environment. Validating changes involves not only testing that the code is good, but also that it works as expected with all dependencies. When a change isn’t validated, the developer knows immediately and can fix the problem and try again. And when a change is validated, the developer can push the code into the production-like environment with confidence, knowing that it’s been tested against upstream and downstream dependencies and won’t cause anything to break unexpectedly. The idea is that as much hardening as possible happens in the development environment so that developers can ship with maximum confidence.


The Making of Atomic CSS: An Interview With Thierry Koblentz

Everything was fair game and everything was abused as we had a very limited set of tools with the demand to do a lot. But things had changed dramatically by the time I joined Yahoo!. Devs from the U.K. were strong supporters of Web Standards and I credit them for greatly influencing how HTML and CSS were written at Yahoo!. Semantic markup was a reality and CSS was written following the Separation of Concerns (SoC) principle to the “T”. YUI had CSS components but did not have a CSS framework yet. ... Every Yahoo! design team had their own view of what was the perfect font size, the perfect margin, etc., and we were constantly receiving requests to add very specific styles to the library. That situation was unmaintainable so we decided to come up with a tool that would let developers create their own styles on the fly, while respecting the Atomic nature of the authoring method. And that’s how Atomizer was born. We stopped worrying about adding styles — CSS declarations — and instead focused on creating a rich vocabulary to give developers a wide array of styling, like media queries, descendant selectors, and pseudo-classes, among other things.


Cyber Signals: Defending against cyber threats with the latest research, insights, and trends

Cyber Signals aggregates insights we see from our research and security teams on the frontlines. This includes analysis from our 24 trillion security signals combined with intelligence we track by monitoring more than 40 nation-state groups and over 140 threat groups. In our first edition, we unpack the topic of identity. Our identities are made up of everything we say and do in our lives, recorded as data that spans across a sea of apps and services. While this delivers great utility, if we don’t maintain good security hygiene our identities are at risk. And over the last year, we have seen identity become the battleground for security. While threats have been rising fast over the past two years, there has been low adoption of strong identity authentication, such as multifactor authentication (MFA) and passwordless solutions. For example, our research shows that across industries, only 22 percent of customers using Microsoft Azure Active Directory (Azure AD), Microsoft’s Cloud Identity Solution, have implemented strong identity authentication protection as of December 2021. 


Bitcoin encryption is safe from quantum computers – for now

In the second part of the study, the team calculated the number of physical qubits needed to break the encryption used for Bitcoin transactions. Marek Narozniak, a physicist at New York University (NYU) in the US who was not part of the study, points out that this question – whether cryptocurrencies are safe against quantum computer attacks – comes with additional constraints not present in the FeMo-co simulation. While a 10-day computation time may be acceptable for FeMo-co simulations, Narozniak notes that the Bitcoin network is set up so that a hacker armed with an error-correcting quantum computer would have a very limited time to decrypt information and steal funds. According to Webber and collaborators, breaking Bitcoin encryption within one hour – a time window within which transactions may be vulnerable – would take about three hundred million qubits. Based on this result, Narozniak concludes that “Bitcoin is pretty safe”, although he warns that not all cryptocurrencies operate the same way. “There are other cryptocurrencies that work differently, and they have different algorithms that could be more vulnerable,” he says.


The 5 Big Risk Vectors of DeFi

Intrinsic protocol risk in DeFi comes in all shapes. In DeFi lending protocols such as Compound or Aave, liquidation is a mechanism that maintains lending-market collateralization at appropriate levels. Liquidations allow participants to take part of the principal in undercollateralized positions. Slippage is another condition present in automated market making (AMM) protocols such as Curve. High slippage conditions in Curve pools can force investors to pay extremely high fees to remove liquidity supplied to a protocol. ... Decentralized governance proposals, a unique aspect of DeFi, control the behavior of a DeFi protocol and, quite often, are the cause of changes in its liquidity composition that affect investors. For instance, governance proposals that alter weights in AMM pools or collateralization ratios in lending protocols typically help liquidity flow in or out of the protocol. A more concerning aspect of DeFi governance from the risk perspective is the increasing centralization of the governance structure of many DeFi protocols. Even though DeFi governance models are architecturally decentralized, many of them are controlled by a small number of parties that can influence the outcome of any proposal.
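
For slippage in particular, the constant-product market maker used by many AMMs makes the effect easy to quantify. A minimal sketch (the pool sizes and fee are illustrative, and Curve's actual StableSwap invariant is more elaborate than the x * y = k rule shown here):

```python
def swap_output(x_reserve, y_reserve, x_in, fee=0.003):
    """Tokens of y received for x_in under the constant-product rule x * y = k."""
    x_after_fee = x_in * (1 - fee)
    k = x_reserve * y_reserve
    new_y_reserve = k / (x_reserve + x_after_fee)
    return y_reserve - new_y_reserve

# The same trade against two pool depths: slippage explodes as liquidity thins out.
print(swap_output(1_000_000, 1_000_000, 1_000))  # ~996: close to the 1:1 spot price
print(swap_output(10_000, 10_000, 1_000))        # ~907: roughly 9% worse than spot
```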


Thousands of Malicious npm Packages Threaten Web Apps

Because npm packages in general are being downloaded upwards of 20 billion times a week—and thus installed across countless web-facing components of software and applications across the world—exploiting them means a sizeable playing field for attackers, researchers said in their Wednesday report. ... That level of activity enables threat actors to launch a number of software supply-chain attacks, researchers said. Accordingly, WhiteSource investigated malicious activity in npm, identifying more than 1,300 malicious packages in 2021 — which were subsequently removed, but may have been brought into any number of applications before they were taken down. “Attackers are focusing more efforts on using npm for their own nefarious purposes and targeting the software supply chain using npm,” they wrote in the report. “In these supply-chain attacks, adversaries are shifting their attacks upstream by infecting existing components that are distributed downstream and installed potentially millions of times.”


Retail in the metaverse: considerations for brands

While the metaverse may be a golden ticket of new opportunities for retailers, there are very real security risks that also need to be considered. For example, in this new, yet-to-be-explored metaverse, there’s the danger of consumers coming across a virtual man in a trench coat selling fake designer watches. How can consumers tell if he is selling the genuine article, or fakes? Ensuring your brand and products are protected in the metaverse is going to be a serious challenge. Monitoring for intellectual property (IP) infringement across the various components of the metaverse will not be easy. If consumers have bad experiences in the metaverse, such as buying fake goods, this can do great harm to a brand’s reputation online and in the real world too. This is why it is important that brands take steps to protect their brand and IP as well as their customers. It has been suggested that operators within the metaverse should establish strategies for protecting users’ IP, similar to how YouTube, eBay and Amazon work to protect rights holders from illicit activity on their platforms.


Where have all the global network aggregators gone?

The best way to leverage SD-WAN’s potential for reducing network transport costs is to have a portfolio strategy with respect to selecting internet transport suppliers. Through robust acquisition processes and detailed coverage and cost modeling, enterprises need to come up with a manageable number of vetted suppliers that meet their global transport requirements. Potential suppliers should be evaluated based on provisioning, SLAs, operational and service support, and commercial and contractual terms, for example. Typically, for larger businesses, a portfolio approach may include one global or strong regional aggregator alongside two to five other telecom service providers with different provisioning models. Having a small handful of suppliers as go-to partners rather than a single supplier can yield significant cost savings – and those savings can run 30% higher compared to a single-source supplier approach. If managed correctly, a portfolio approach also offers a way to keep performance and pricing pressure on competing suppliers.



Quote for the day:

"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner

Daily Tech Digest - February 03, 2022

DeepMind says its new AI coding engine is as good as an average human programmer

A lot of progress has been made developing AI coding systems in recent years, but these systems are far from ready to just take over the work of human programmers. The code they produce is often buggy, and because the systems are usually trained on libraries of public code, they sometimes reproduce material that is copyrighted. In one study of an AI programming tool named Copilot developed by code repository GitHub, researchers found that around 40 percent of its output contained security vulnerabilities. Security analysts have even suggested that bad actors could intentionally write and share code with hidden backdoors online, which then might be used to train AI programs that would insert these errors into future programs. Challenges like these mean that AI coding systems will likely be integrated slowly into the work of programmers — starting as assistants whose suggestions are treated with suspicion before they are trusted to carry out work on their own. In other words: they have an apprenticeship to carry out. But so far, these programs are learning fast.


The evolution of a Mac trojan: UpdateAgent’s progression

UpdateAgent is uniquely characterized by its gradual upgrading of persistence techniques, a key feature that indicates this trojan will likely continue to use more sophisticated techniques in future campaigns. Like many information-stealers found on other platforms, the malware attempts to infiltrate macOS machines to steal data and it is associated with other types of malicious payloads, increasing the chances of multiple infections on a device. The trojan is likely distributed via drive-by downloads or advertisement pop-ups, which impersonate legitimate software such as video applications and support agents. This action of impersonating or bundling itself with legitimate software increases the likelihood that users are tricked into installing the malware. Once installed, UpdateAgent starts to collect system information that is then sent to its command-and-control (C2) server. Notably, the malware’s developer has periodically updated the trojan over the last year to improve upon its initial functions and add new capabilities to the trojan’s toolbox. 


AI technology is redefining surveillance

With new AI-based surveillance tools like facial recognition, businesses can go beyond security to provide their employees with an enhanced workplace experience. With 98% of IT leaders concerned about security challenges related to a hybrid workforce, AI-based surveillance can help better manage staggered and irregular work schedules by seamlessly identifying which employees are supposed to be in specific areas at certain times. This tiered access control can be taken a step further, as an operator can link facial recognition security software to systems that control automated doors or pair it with baseline authentication solutions for an added layer of protection. By the same token, these AI surveillance tools can also be used to directly benefit the workers they observe. Now, a video management system (VMS) can be trained to identify VIPs, authorized personnel, and guests of the company, creating a frictionless entry experience for those security would wish to treat with utmost care. Imagine if every time the CEO walked into the company building, he could stream through security without having to sign in. 


Explained: Prospective learning in AI

Prospective learning is important because many critical problems are novel experiences that come with little information, negligible probability, and high consequences. Unfortunately, such problems precipitate the downfall of AI systems, such as when medical diagnoses systems cannot detect underrepresented diseases in the samples used to train them. Therefore, the challenge with intelligent systems is to distinguish novel experiences, discern the potentially complex ways in which they connect to past experiences, and then act accordingly. ... Constraints, such as built-in priors and inductive biases, shrink the hypothesis space so that the intelligence needs less data and fewer solutions to resolve the current problem. These constraints are built into the system of AI and traditionally come in the form of statistical constraints and computational constraints. The former restricts the space of hypotheses to improve statistical efficiency, thereby reducing the amount of data needed to reach a particular goal. The latter seeks to improve computational efficiency by limiting the amount of space and/or time that an intelligent system has to learn and make deductions.
 

Silent Cyber Risks For Insurers: Can AI Applications Help?

Silent cyber is the term given to a situation in which cyber coverage is implied to be provided to an insured, without the knowledge of the insurer providing the coverage. In simpler words, a silent cyber situation strikes when a court’s findings are in favour of a policy owner because the policy does not clearly exclude cyber coverage. The incidents of silent cyber surged during the Covid-19 period as ransomware incidents also proliferated. This uptrend in cyber-related insurance claims also raised the cyber risk and exposure carried by insurers as they continue to pay out claims. This has also led to many insurers denying coverage in many cases, even in situations where a policy did not explicitly cite any cyber coverage. It has also led to an increase in premiums for cyber insurance. To balance out this issue, regulators have issued guidelines to help manage such risks for insurance firms, but such initiatives have not been enough. Surveys have revealed an inherent need for advanced technologies that can bridge the glaring lack of explicit language referring to cyberattacks in insurance policies.


The downside of machine learning in health care

Coming from computers, the product of machine-learning algorithms offers “the sheen of objectivity,” according to Ghassemi. But that can be deceptive and dangerous, because it’s harder to ferret out the faulty data supplied en masse to a computer than it is to discount the recommendations of a single possibly inept (and maybe even racist) doctor. “The problem is not machine learning itself,” she insists. “It’s people. Human caregivers generate bad data sometimes because they are not perfect.” Nevertheless, she still believes that machine learning can offer benefits in health care in terms of more efficient and fairer recommendations and practices. One key to realizing the promise of machine learning in health care is to improve the quality of data, which is no easy task. “Imagine if we could take data from doctors that have the best performance and share that with other doctors that have less training and experience,” Ghassemi says. “We really need to collect this data and audit it.” The challenge here is that the collection of data is not incentivized or rewarded, she notes.


Unpatched Security Bugs in Medical Wearables Allow Patient Tracking, Data Theft

Besides just the device, Kaspersky reported finding concerning flaws in the most common wearable device platform, Qualcomm Snapdragon Wearable. The platform has been riddled with bugs, the team added, bringing the total number of vulnerabilities found in the platform since it was launched in 2020 to 400 — many still unpatched. This makes for an enormous, vulnerable attack surface across the healthcare sector, while attacks are getting more frequent, brazen and destructive. It’s up to hospitals and medical service providers to build telehealth systems with security in mind, Nate Warfield, CTO of Prevailion, wrote in Threatpost last summer. He called on the private sector to lend a hand to shore up critical healthcare infrastructure, and lauded groups like the CTI League and the COVID-19 Cyber Threat Coalition, formed at the beginning of the pandemic, to share threat intelligence against a rising threat of attack. “Cyber-threats to healthcare won’t slow down, even after the pandemic is over,” Warfield explained. “Hospitals need to take more aggressive action to fortify themselves against these attacks…They also need to increase their investments in cybersecurity.”


New Open-Source Multi-Cloud Asset to build SaaS

Regulated industries such as financial institutions, insurance, healthcare and more all want the advantages of a hybrid cloud, but need assurance they can protect their assets and maintain compliance with industry and regulatory requirements. The key to hosting regulated workloads in the cloud is to eliminate and mitigate the risks that might be standing in the way of progress. In regulated industries, critical risks fall into the general categories of compliance, cybersecurity, governance, business continuity and resilience. The DevSecOps approach of our CI/CD pipelines is based on an IBM DevSecOps reference architecture, helping to address some of the risks faced by regulated industries. The CI/CD pipelines include steps to collect and upload deployment log files, artifacts, and evidence to a secure evidence locker. In addition, a toolchain integration with IBM Security and Compliance Center verifies the security and compliance posture of the toolchain by identifying the location of the evidence locker and the presence of the evidence information.
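
An evidence-collection step of that kind often reduces to hashing the build artifacts and shipping the digests to the locker. A minimal sketch (the locker endpoint, token variable, and payload shape are hypothetical placeholders, not the IBM toolchain's actual API):

```python
import hashlib
import json
import os
import pathlib
import urllib.request

LOCKER_URL = "https://evidence-locker.example.com/api/evidence"  # hypothetical endpoint
ARTIFACT_DIR = pathlib.Path("build/artifacts")

# Record a SHA-256 digest for every artifact produced by the pipeline run.
evidence = [
    {"file": str(path), "sha256": hashlib.sha256(path.read_bytes()).hexdigest()}
    for path in sorted(ARTIFACT_DIR.glob("*")) if path.is_file()
]

request = urllib.request.Request(
    LOCKER_URL,
    data=json.dumps({"run_id": os.environ.get("BUILD_ID"), "evidence": evidence}).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['LOCKER_TOKEN']}",  # hypothetical credential
    },
)
urllib.request.urlopen(request)  # the locker keeps the digests for later audits
```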


Decentralized technology will end the Web3 privacy conundrum

It’s not just willful negligence, of course. There is a good technical reason that web applications today are unable to execute on existing blockchain architectures. Because all participants are currently forced to re-execute all transactions in order to verify the state of their ledger, every service on a blockchain is effectively time-sharing a single, finite, global compute resource. Another reason that privacy has not been prioritized is that it’s very hard to guarantee. Historically, privacy tools have been slow and inefficient, and making them more scalable is hard work. But just because privacy is hard to implement doesn’t mean it shouldn’t be a priority. The first step is to make privacy simpler for the user. Achieving privacy in crypto should not require clunky workarounds, shady tools or a deep expertise of complex cryptography. Blockchain networks, including smart contract platforms, should support optional privacy that works as easily as clicking a button. Blockchain technology is poised to answer these calls with security measures that guarantee utmost privacy with social accountability.


Four tips to increase executive buy-in to disaster recovery

One of the core issues when it comes to communicating technology concerns to a business audience is the use of appropriate vocabulary and the ability to communicate context. Tech-rich terminology will immediately switch off those who don’t understand it, and ambiguous references that don’t adequately explain the impact to the business, or the everyday prevalence of security threats, will fall on deaf ears. In terms of disaster recovery, the word “disaster”, for example, is often associated with low-probability events such as a widespread outage due to an earthquake, flood or act of terrorism, and fails to adequately communicate the prevalence of data loss events. In reality, however, most downtime is caused by mundane, everyday events such as hardware failure, human error, severe weather or power outages. This has become even more the case since the pandemic has driven widespread adoption of hybrid and home working. As employees work remotely with greater frequency, employee-based incidents are increasingly on the rise, wreaking havoc on IT environments.



Quote for the day:

"If the owner of the land leads you, you cannot get lost." -- Ugandan Proverb

Daily Tech Digest - February 02, 2022

These hackers are hitting victims with ransomware in an attempt to cover their tracks

Once installed on a compromised machine, PowerLess allows attackers to download additional payloads and steal information, while a keylogging tool sends all the keystrokes entered by the user directly to the attacker. Analysis of PowerLess backdoor campaigns appears to link the attacks to tools, techniques and motivations associated with Phosphorus campaigns. In addition to this, analysis of the activity seems to link the Phosphorus threat group to ransomware attacks. One of the IP addresses being used in the campaigns also serves as a command-and-control server for the recently discovered Memento ransomware, leading researchers to suggest there could be a link between the ransomware attacks and state-backed activity. "A connection between Phosphorus and the Memento ransomware was also found through mutual TTP patterns and attack infrastructure, strengthening the connection between this previously unattributed ransomware and the Phosphorus group," said the report. Cybereason also found a link between a second Iranian hacking operation, named Moses Staff, and additional ransomware attacks, which are deployed with the aid of another newly identified trojan backdoor, dubbed StrifeWater.


Managing Technical Debt in a Microservice Architecture

Paying down technical debt while maintaining a competitive velocity delivering features can be difficult, and it only gets worse as system architectures get larger. Managing technical debt for dozens or hundreds of microservices is much more complicated than for a single service, and the risks associated with not paying it down grow faster. Every software company gets to a point where dealing with technical debt becomes inevitable. At Optum Digital, a portfolio – also known as a software product line – is a collection of products that, in combination, serve a specific need. Multiple teams get assigned to each product, typically aligned with a software client or backend service. There are also teams for more platform-oriented services that function across several portfolios. Each team most likely is responsible for various software repositories. There are more than 700 engineers developing hundreds of microservices. They take technical debt very seriously because the risks of it getting out of control are very real.


How to approach modern data management and protection

European tech offers a serious alternative to US and Chinese models when it comes to data. It’s also a necessary alternative and must evolve towards European technological autonomy, according to D’urso. “The loss of economic autonomy will impact political power. In other words, data and economic frailty will only further weaken Europe’s role at the global power table and open the door to a variety of potential flash points (military, cyber, industrial, social and so on). “Europe should be proud of its model, which re-injects tax revenues into a fair and respectful social and cultural framework. The GDPR policy is clearly at the heart of a European digital mindset.” Luc went further and suggested that data regulation, including management, protection and storage, is central to the upcoming French presidential election and the current French Presidency of the Council of the European Union. “The French Presidency of the Council of the EU will clearly place data protection into the spotlight of political debates. It is not about protectionism, but Europe must safeguard its data against foreign competition to enhance its autonomy and build a prosperous future.”


Edge computing strategy: 5 potential gaps to watch for

Edge strategies that depend on one-off “snowflake” patterns for their success will cause long-term headaches. This is another area where experience with hybrid cloud architecture will likely benefit edge thinking: If you already understand the importance of automation and repeatability to, say, running hundreds of containers in production, then you’ll see a similar value in terms of edge computing. “Follow a standardized architecture and avoid fragmentation – the nightmare of managing hundreds of different types of systems,” advises Shahed Mazumder, global director, telecom solutions at Aerospike. “Consistency and predictability will be key in edge deployments, just like they are key in cloud-based deployments.” Indeed, this is an area where the cloud-edge relationship deepens. Some of the same approaches that make hybrid cloud both beneficial and practical will carry forward to the edge, for example. In general, if you’ve already been solving some of the complexity involved in hybrid cloud or multi-cloud environments, then you’re on the right path.


Top Scam-Fighting Tactics for Financial Services Firms

At its core, a scam is a situation in which the customer has been duped into initiating a fraudulent transaction that they believe to be authentic. Applying traditional controls for verifying or authenticating the activity may therefore fail. But the underlying ability to detect the anomaly remains critical. "Instead of validating the transaction or the individual, we are going to have to place more importance on helping the customer understand that what they believe to be legitimate is actually a lie," Mitchell of Omega FinCrime says. He says fraud operations teams will need to become more customer-centric, education-focused and careful in their interactions. Mitigating a scam apocalypse will require mobilization across the market, which includes financial institutions, solution providers, payment networks, regulators, telecom carriers, social media companies and law enforcement agencies. In the short term, investment priorities must expand beyond identity controls to include orchestration controls and decision support systems that allow financial institutions to see the interaction more holistically, Fooshee says.


Better Integration Testing With Spring Cloud Contract

Imagine a simple microservice with a producer and a consumer. When writing tests in the consumer project, you have to write mocks or stubs that model the behavior of the producer project. Conversely, when you write tests in the producer project, you have to write mocks or stubs that model the behavior of the consumer project. As such, multiple sets of related, redundant code have to be maintained in parallel in disconnected projects. ... “Mock” gets used online in a somewhat generic way, meaning any fake object used for testing, and this can get confusing when differentiating “mocks” from “stubs”. However, specifically, a “mock” is an object that tests for behavior by registering expected method calls before a test run. In contrast, a “stub” is a testable version of the object with callable methods that return pre-set values. Thus, a mock checks to see if the object being tested makes an expected sequence of calls to the object being mocked, and throws an error if the behavior deviates (that is, makes any unexpected calls). A stub does not do any testing itself, per se, but instead will return canned responses to pre-determined methods to allow tests to run.
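
The distinction is quick to demonstrate. Here it is sketched with Python's unittest.mock for brevity, rather than the JVM stack Spring Cloud Contract targets (the producer API is a made-up example):

```python
from unittest.mock import Mock

# Stub: hands back canned responses; the test then asserts on outputs.
producer_stub = Mock()
producer_stub.get_price.return_value = 42.0
assert producer_stub.get_price("SKU-1") == 42.0

# Mock: the test asserts on behavior, i.e. which calls were made.
producer_mock = Mock()
producer_mock.get_price.return_value = 42.0
producer_mock.get_price("SKU-1")
producer_mock.get_price.assert_called_once_with("SKU-1")  # raises if the calls deviate
```

Spring Cloud Contract's pitch is that a single contract generates both sides: verification tests run against the producer and matching stubs are served to consumers, so the two never drift apart.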


Now for the hard part: Deploying AI at scale

Fortunately, AI tools and platforms have evolved to the point at which more governable, assembly-line approaches to development are possible, most of which are being harnessed under the still-evolving MLOps model. MLOps is already helping to cut the development cycle for AI projects from months, and sometimes years, down to as little as two weeks. Using standardized components and other reusable assets, organizations are able to create consistently reliable products with all the embedded security and governance policies needed to scale them up quickly and easily. Full scalability will not happen overnight, of course. Accenture’s Michael Lyman, North American lead for strategy and consulting, says there are three phases of AI implementation. ... To accelerate this process, he recommends a series of steps, such as starting out with the best use cases and then drafting a playbook to help guide managers through the training and development process. From there, you’ll need to hone your institutional skills around key functions like data and security analysis, process automation and the like.


How to measure security efforts and have your ideas approved

In the world of cybersecurity, the most frequently asked question focuses on “who” is behind a particular attack or intrusion – and may also delve into the “why”. We want to know who the threat actor or threat agent is, whether it is a nation state, organized crime, an insider, or some organization to which we can ascribe blame for what occurred and for the damage inflicted. Those less familiar with cyberattacks may often ask, “Why did they hack me?” As someone who has been responsible for managing information risk and security in the enterprise for 20-plus years, I can assure you that I have no real influence over threat actors and threat agents – the “threat” part of the above equation. These questions are rarely helpful, providing only psychological comfort, like a blanket for an anxious child, and quite often distract us from asking the one question that can really make a difference: “HOW did this happen?” But even those who have asked HOW have answered with simple vulnerabilities: we had an unpatched system, we lacked MFA, or the user clicked on a link.

Data analysts are one type of data consumer. A data analyst answers questions about the present, such as: What is going on now? What are the causes? Can you show me XYZ? What should we do to avoid/achieve ABC? What is the trend over the past 3 years? Is our product doing well? ... Data scientists are another type of data consumer. Instead of answering questions about the present, they try to find patterns in the data and answer questions about the future, i.e., prediction. This technique has actually existed for a long time. You must have heard of it: it’s called statistics. Machine learning and deep learning are the two most popular ways to utilise the power of computers to find patterns in data. ... How do data analysts and scientists get the data? How does the data get from user behaviour to the database? How do we make sure the data is accountable? The answer is data engineers. Data consumers cannot perform their work without data engineers setting up the whole structure. They build data pipelines that ingest data from users’ devices into the cloud and then into the database.
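
As a toy illustration of that last point, here is a self-contained Java sketch of an ingest-then-load pipeline; the ClickEvent type and the in-memory lists standing in for the cloud landing zone and the database are invented for the example, whereas a real pipeline would use a message queue and a data warehouse.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical event emitted by a user's device.
    record ClickEvent(String userId, String page, long timestampMillis) {}

    class PipelineSketch {
        // Stand-in for the cloud landing zone (a message queue in practice).
        static final List<ClickEvent> landingZone = new ArrayList<>();
        // Stand-in for the database that analysts and scientists query.
        static final List<ClickEvent> database = new ArrayList<>();

        // Ingest: accept raw events arriving from devices.
        static void ingest(ClickEvent event) {
            landingZone.add(event);
        }

        // Load: validate events before they reach the database; rejecting
        // malformed records is one small part of keeping data accountable.
        static void load() {
            for (ClickEvent e : landingZone) {
                if (e.userId() != null && e.timestampMillis() > 0) {
                    database.add(e);
                }
            }
            landingZone.clear();
        }

        public static void main(String[] args) {
            ingest(new ClickEvent("user-42", "/pricing", System.currentTimeMillis()));
            load();
            System.out.println("rows loaded: " + database.size());
        }
    }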


The Value of Machine Unlearning for Businesses, Fairness, and Freedom

Another way machine unlearning could deliver value for both individuals and organizations is the removal of biased data points identified after model training. Despite laws that prohibit the use of sensitive data in decision-making algorithms, there is a multitude of ways bias can find its way in through the back door, leading to unfair outcomes for minority groups and individuals. There are similar risks in other industries, such as healthcare: when a decision can mean the difference between life-changing and, in some cases, life-saving outcomes, algorithmic fairness becomes a social responsibility, and algorithms are often unfair because of the data they are trained on. For this reason, financial inclusion is rightly a key focus for financial institutions, and not just for the sake of social responsibility. Challengers and fintechs continue to innovate solutions that are making financial services more accessible. From a model monitoring perspective, machine unlearning could also safeguard against model degradation.



Quote for the day:

"Good leaders make people feel that they're at the very heart of things, not at the periphery." -- Warren G. Bennis

Daily Tech Digest - February 01, 2022

Why the metaverse experience isn’t quite a CX revolution yet

The metaverse aims to solve a key CX sticking point of the current state of the internet: a lack of interactive, engaging, and immersive experiences. But a number of startups and companies are working on improving the internet experience without the need for a metaverse. From interactive content to immersive experiences, the future of the internet is looking a lot more interesting, and it all relies on data. In other words, we can use data to create innovative customer experiences, the metaverse notwithstanding. For one, the current state of internet content is largely static. Websites are filled with text, images, and videos, and there’s very little that allows users to deeply interact with and change the course of the content. This is changing with the rise of interactive content, such as quizzes, polls, surveys, and games. One of the best ways to use interactive content is to create customer feedback loops. This involves using interactive content to gather customer data, which can then be used to improve the customer experience. There are several ways to do this, but one of the most common is to use quizzes and surveys, which can gather data on customer preferences, demographics, and even purchase history.


Solana Uses Rust to Pull in Developers and Avoid Copypasta

Yakovenko, who is the engineering brains of Solana, first noted Rust’s popularity. “It’s not like we picked Haskell or something,” he joked (a dig at Solana’s rival blockchain, Cardano, currently ranked 6th on CoinMarketCap, which did choose Haskell). He went on to explain why they didn’t choose to build with Solidity and the Ethereum Virtual Machine (EVM). “The hard part with EVM,” he said, “is are you gonna get, like, really smart people full-time thinking about […] how do I build in scale? […] Or are you just going to get somebody that copies something from Solidity and then slaps a token on it?” What Yakovenko is getting at is that Solidity, in his opinion at least, attracts developers who are more likely to copy and paste smart contract code from existing blockchain projects (a practice known colloquially as “copypasta”). So by choosing Rust, which is harder to learn than Solidity and much more likely to be used by professional programmers, Solana hopes to attract developers who can build custom, scalable programs.


Success of web3 hinges on remedying its security challenges

The success of web3 hinges on innovation to solve new security challenges created by different application architectures. In web3, decentralized applications or “dApps” are built without relying on the traditional application logic and database layers that exist in Web 2.0; instead, a blockchain, network nodes, and smart contracts are used to manage logic and state. Users still access a front end, which connects to those nodes, to update state, such as by publishing new content or making a purchase. These activities require users to sign transactions using their private keys, typically managed with a wallet, a model that is intended to preserve user control and privacy. Transactions on the blockchain are fully transparent, publicly accessible and immutable (meaning they cannot be changed). Like any system, this design has security trade-offs. The blockchain does not require actors to be trusted as in Web 2.0, but making updates to address security problems is harder. Users get to maintain control over their identities, but no intermediaries exist to provide recourse in the event of attacks or key compromises.
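
To make the signing model concrete, here is a minimal Java sketch using the standard java.security APIs; it illustrates only the generic sign-and-verify flow, not any particular chain's curves, key formats, or transaction encoding, and the payload string is invented for the example.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    // The wallet's private key signs a transaction payload; any node holding
    // the matching public key can verify it without trusting the sender.
    class SigningSketch {
        public static void main(String[] args) throws Exception {
            KeyPair wallet = KeyPairGenerator.getInstance("EC").generateKeyPair();
            byte[] tx = "publish: new content".getBytes(StandardCharsets.UTF_8);

            Signature signer = Signature.getInstance("SHA256withECDSA");
            signer.initSign(wallet.getPrivate());    // only the key holder can sign
            signer.update(tx);
            byte[] signature = signer.sign();

            Signature verifier = Signature.getInstance("SHA256withECDSA");
            verifier.initVerify(wallet.getPublic()); // anyone can verify
            verifier.update(tx);
            System.out.println("valid: " + verifier.verify(signature));
        }
    }

The asymmetry is the point: verification needs no shared secret and no intermediary, which is what removes the trusted middleman but also removes the party who could reverse a mistake.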


Rust-Coded Malware Key Factor in BlackCat's Meteoric Rise

According to some researchers, one of the key factors in BlackCat's success and rapid growth is its use of the Rust programming language in its malware code. Security researchers from cybersecurity firm Recorded Future say that BlackCat is the first professional ransomware group to use Rust. The first ransomware strain coded in Rust as a proof of concept was released on GitHub in 2020. In May 2021, the Buer dropper malware was updated with code written in Rust. At the time, researchers from cybersecurity firm Proofpoint told ISMG that the new version, named RustyBuer, was harder to detect. Photon researchers tell ISMG that ransomware is most commonly coded in C, C++ or Go. But they add that Rust has many advantages: "It features good performance but more crucially, secure memory management, which reduces the probability that the malware will crash before it can be executed." Several other languages use a garbage collector to clean unused memory spaces automatically, but that trades off some performance, the researchers say.


Understaffing persistently impacting enterprise privacy teams

“People are an essential component of any privacy program, both the privacy professionals driving the work forward and employees across the enterprise who follow good data privacy practices,” says Safia Kazi, ISACA Privacy Professional Practice Advisor. “Enterprises need to sufficiently invest in their privacy programs and teams, not only to retain privacy staff and upskill talent to fill open roles, but to also prioritize privacy training efforts to ensure all employees are supporting privacy initiatives.” Despite issues with staffing and skills gaps, 41 percent of respondents report they are very confident or completely confident in the ability of their privacy team to ensure data privacy and achieve compliance with new privacy laws and regulations. One in 10 respondents’ enterprises has experienced a material privacy breach in the last 12 months, consistent with last year’s results. ... “Privacy professionals are vital in driving transparency and accountability across their organizations, and that has never been more important, as more consumers, employees and investors dictate the success of organizations that they do, or don’t, trust,” notes Alex Bermudez, OneTrust Privacy Manager.


Crypto and NFTs: A New Digital Footprint for Enterprises?

There can be a dark side to this new frontier. Increased use of decentralized assets has brought concerns about exploits such as cryptocurrency money laundering. The Financial Action Task Force (FATF), a global organization that combats money laundering, has been looking at how to mitigate risks around cryptocurrency, says Malcolm Wright, head of strategy for global regulatory and compliance solutions with Shyft Network. Regulators in the United States already had policy and guidance in this arena in place, he says, but the task force proposed wider suggestions for dealing with these emerging issues. “FATF doesn’t create legislation but it does lay down recommendations and evaluates countries against how well they’ve implemented them,” Wright says. Shyft is a public protocol for validating identity to secure cryptocurrency, establishing trust in blockchain data. FATF laid out recommendations in 2019 with the expectation that countries would regulate, and industry would look to comply, within two years, he says. This was a way to mitigate the risk that illicit actors pose.


Nothingness at the top of the heap

Business leaders faced with these issues, as most people sooner or later are, will want to consult a brilliant and energetic executive by the name of David Levinsky. You’ll find a few by that name on LinkedIn, of course, but the one you most need to know exists only between covers. The Rise of David Levinsky is probably the first important American Jewish novel, yet its themes—the ambiguous nature of “success,” among them—remain as timely and universal as when the book was first published more than a century ago, in 1917. If that were all Levinsky offered, it would be enough, as we sing at Passover. But the book delivers far more, succeeding as literature even as it illustrates the difficulties of starting a business in a new country, the importance of manufacturing in fostering innovation, the social tensions that arise from extreme inequality, and the effects of imposter syndrome—for Levinsky, a lifelong affliction. “At the bottom of my heart,” he will reflect as a middle-aged tycoon who employs his own chauffeur, “I cow before waiters to this day.”


Better Metrics for Building High Performance Teams

Not surprisingly, VPs of Engineering get a lot of pressure to quantify the value and productivity of the engineering department for bosses or peers who often don’t have the same perspective or skill set, and these metrics are an easy way to understand activity in the field. But there’s a big issue: lines of code rewards bloated solutions, and likewise, optimizing for faster cycle time may not solve a problem’s root cause. These metrics are often used to compare individuals against each other and teams to one another. These are dangerous paths to walk down: you will quickly fail to see the realities the team or individual may be facing (poor onboarding, perhaps, or even that someone has been pulled off a project to help on something else, making the dashboard look bad). When you begin surveilling engineers to get this kind of data, and then making management decisions based on that flawed information, you’ll drive turnover and discontent in the organization.


2022: the year of the open source tiger

Open source brings economic savings, that’s true, but beyond this it supports societal values including collaboration, skills development and improved and diverse innovation. The UK Government’s digital focus in 2022 will be on “Funding, People and Ideas,” according to Digital Minister Chris Philp. The values of open source align directly with this ambition. Talent will be actively nurtured, and the UK financial sector will be tasked with a significant increase in its investment in digital start-ups through UK pension funds, driving the current 12% level towards the 65% equivalent seen in US pensions. This increase in investment should create a larger pot for all, helping to support businesses in establishing their operations and building sustainable models that take advantage of the circular economy, rather than going for all-out perpetual growth. With a greater focus on scaling UK digital businesses, some of the Brexit pain will be alleviated with a new fast-track visa route launching in March or April, offered to organisations demonstrating 20% financial or headcount growth over the last three years.


Hybrid work: 5 tips for prioritizing the employee experience

In a broader sense, IT plays one of the most important roles in removing employee friction. We have opportunities to automate or digitize manual processes, remove duplicative systems or logins, and fix systems that don’t talk to one another. We can avoid pushing out patches when employees are in the middle of something important, we can reduce the time they spend on a help desk call, and we can simplify how many logins they must manage. When we proactively remove these big rocks from employees’ backpacks, we lighten their load and enable them to focus on more value-added work. Some of the opportunities IT tackles to improve the employee experience wouldn’t be labeled groundbreaking, cutting-edge initiatives. But it’s necessary work. Take time to sit and talk with your employees and get as much data as you can out of your help desk and ticketing systems. This will help you identify trends and proactively tackle problems in the employee experience – before your employees get so burned out that they leave. When the organization knows you care about the “little” things, you’ll earn the trust you need to tackle the big things.



Quote for the day:

"A pat on the back is only a few vertebrae removed from a kick in the pants, but is miles ahead in results." -- W. Wilcox