Daily Tech Digest - December 12, 2021

AWS Among 12 Cloud Services Affected by Flaws in Eltima SDK

USB Over Ethernet enables sharing of multiple USB devices over Ethernet, so that users can connect to devices such as webcams on remote machines anywhere in the world as if the devices were physically plugged into their own computers. The flaws are in the USB Over Ethernet function of the Eltima SDK, not in the cloud services themselves, but because of code-sharing between the server side and the end-user apps, they affect both clients – such as laptops and desktops running Amazon WorkSpaces software – and cloud-based machine instances that rely on services such as Amazon Nimble Studio AMI, which run in the Amazon cloud. The flaws allow attackers to escalate privileges and launch a slew of malicious actions, including undermining the very security products that users depend on for protection. Specifically, the vulnerabilities can be used to “disable security products, overwrite system components, corrupt the operating system or perform malicious operations unimpeded,” SentinelOne senior security researcher Kasif Dekel said in a report published on Tuesday.


Rust in the Linux Kernel: ‘Good Enough’

When we first looked at the idea of Rust in the Linux kernel, it was noted that the objective was not to rewrite the kernel’s 25 million lines of code in Rust, but rather to augment new development with a more memory-safe language than the standard C normally used in Linux development. Part of the issue with using Rust is that Rust is compiled with LLVM, as opposed to GCC, and consequently supports fewer architectures. This is a problem we saw play out when the Python cryptography library replaced some old C code with Rust, leading to a situation where certain architectures would not be supported. Hence, using Rust only for drivers would limit the impact of this particular limitation. Ojeda further noted that the Rust for Linux project has been invited to a number of conferences and events this past year, and even garnered some support from Red Hat, which joins Arm, Google, and Microsoft in supporting the effort. According to Ojeda, Red Hat says that “there is interest in using Rust for kernel work that Red Hat is considering.”


DeepMind tests the limits of large AI language systems with 280-billion-parameter model

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of these large language models (LLMs) by building one with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters). It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix. “I think right now it really looks like the model can fail in a variety of ways,” said Rae.


2022 transformations promise better builders, automation, robotics

The Great Resignation is real, and it has affected the logistics industry more than anyone realizes. People don’t want low-paying and difficult jobs when there’s a global marketplace where they can find better work. Automation will be seen as a way to address this, and in 2022, we will see a lot of tech VC investment in automation and robotics. Some say SpaceX and Virgin can deliver cargo via orbit, but I think that’s ridiculous. What we need (and what I think will be funded in 2022) are more electric and autonomous vehicles like eVTOL, a company that is innovating the “air mobility” market. According to eVTOL’s website, the U.S. Department of Defense has awarded $6 million to the City of Springfield, Ohio, for a National Advanced Air Mobility Center of Excellence. ... In 2022 transformations, grocery will cease to be an in-store retail experience only, and the sector will be as virtual and digitally driven as the best of them. Things get interesting when we combine locker pickup, virtual grocery, and automated last-mile delivery using autonomous vehicles that can deliver within a mile of the warehouse or store.


Penetration testing explained: How ethical hackers simulate attacks

In a broad sense, a penetration test works in exactly the same way that a real attempt to breach an organization's systems would. The pen testers begin by examining and fingerprinting the hosts, ports, and network services associated with the target organization. They will then research potential vulnerabilities in this attack surface, and that research might suggest further, more detailed probes into the target system. Eventually, they'll attempt to breach their target's perimeter and get access to protected data or gain control of their systems. The details, of course, can vary a lot; there are different types of penetration tests, and we'll discuss the variations in the next section. But it's important to note first that the exact type of test conducted and the scope of the simulated attack needs to be agreed upon in advance between the testers and the target organization. A penetration test that successfully breaches an organization's important systems or data can cause a great deal of resentment or embarrassment among that organization's IT or security leadership.
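The fingerprinting step described above can be sketched in a few lines. This is an illustrative TCP connect scan in Python (the function name is mine; real engagements use purpose-built tools such as Nmap, and only against targets within the agreed scope):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Report which of the given TCP ports accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports
```

Each open port found this way becomes a candidate for the vulnerability-research phase that follows.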


EV charging in underground carparks is hard. Blockchain to the rescue

According to Bharadwaj, the concrete and steel environment effectively acted as a “Faraday cage,” which meant that the EV chargers wouldn’t talk to people’s mobile phones when they tried to initiate charging. You could find yourself stranded, unable to charge your car. “So we had to innovate.” ... As with any EV charging, a payment app connects your car to the EV charger. With Xeal, the use of NFC means the only time you need the Internet is to download the app in the first instance and create a profile that includes your personal and vehicle information and payment details. You then receive a cryptographic token on your mobile phone that authenticates your identity and enables you to access all of Xeal’s public charging stations. The token is time-bound, which means it dissolves after use. To charge your car, you hold your phone up to the charger; this automatically brings up an NFC scanner, which reads the cryptographic token, opens the app, and authenticates your charging session, and within milliseconds the charging session starts.
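A minimal sketch of such a time-bound, single-use token, using an HMAC signature over an expiry timestamp. The key, field layout, and function names here are illustrative, not Xeal's actual protocol:

```python
import hashlib, hmac, time

SECRET = b"demo-signing-key"  # illustrative only; a real service would use per-user keys
_used = set()

def issue_token(user_id, ttl=300, now=None):
    """Issue a time-bound token: payload plus an HMAC signature."""
    now = now if now is not None else int(time.time())
    payload = f"{user_id}:{now + ttl}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def redeem_token(token, now=None):
    """Valid only if the signature checks out, it hasn't expired, and it's unused."""
    now = now if now is not None else int(time.time())
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    expiry = int(payload.rsplit(":", 1)[1])
    if now >= expiry or token in _used:
        return False
    _used.add(token)  # the token "dissolves": it can never be redeemed again
    return True
```

In this toy version a token fails verification once its expiry passes and is marked consumed on first redemption, which is the "dissolves after use" behavior described above.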


Top 8 AI and ML Trends to Watch in 2022

The scarcity of skilled AI developers or engineers stands as a major barrier to adopting AI technology in many companies. No-code and low-code technologies come to the rescue. These solutions aim to offer simple interfaces, in theory, to develop highly complex AI systems. Today, web design and no-code user interface (UI) tools let users create web pages simply by dragging and dropping graphical elements together. Similarly, no-code AI technology allows developers to create intelligent AI systems by simply merging different ready-made modules and feeding them industrial domain-specific data. Furthermore, NLP, low-code, and no-code technologies will soon enable us to instruct complex machines with our voice or written instructions. These advancements will result in the “democratization” of AI, ML, and data technologies. ... In 2022, with the aid of AI and ML technologies, more businesses will automate multiple yet repetitive processes that involve large volumes of information and data. In the coming years, an increased rate of automation can be seen in various industries using robotic process automation (RPA) and intelligent business process management software (iBPMS). 


The limitations of scaling up AI language models

Large language models like OpenAI’s GPT-3 show an aptitude for generating humanlike text and code, automatically writing emails and articles, composing poetry, and fixing bugs in software. But the dominant approach to developing these models involves leveraging massive computational resources, which has consequences. Beyond the fact that training and deploying large language models can incur high technical costs, the requirements put the models beyond the reach of many organizations and institutions. Scaling also doesn’t resolve the major problem of model bias and toxicity, which often creeps in from the data used to train the models. In a panel during the Conference on Neural Information Processing Systems (NeurIPS) 2021, experts from the field discussed how the research community should adapt as progress in language models continues to be driven by scaled-up algorithms. The panelists explored how to ensure that smaller institutions can meaningfully research and audit large-scale systems, as well as ways to help ensure that the systems behave as intended.


Here are three ways distributed ledger technology can transform markets

While firms have narrowed their scope to address more targeted pain points, the increased digitalisation of assets is helping to drive interest in the adoption of DLT in new ways. Previous talk of mass disruption of the financial system has given way to more realistic, but still transformative, discussions around how DLT could open doors to a new era of business workflows, enabling transactional exchanges of assets and payments to be recorded, linked, and traced throughout their entire lifecycle. DLT’s true potential rests with its ability to eliminate traditional “data silos”, so that parties no longer need to build separate recording systems, each holding a copy of their version of “the truth”. This inefficiency leads to time delays, increased costs and data quality issues. In addition, the technology can enhance security and resilience, and would give regulators real-time access to ledger transactions to monitor and mitigate risk more effectively. In recent years, we have been pursuing a number of DLT-based opportunities, helping us understand where we believe the technology can deliver maximum value while retaining the highest levels of risk management.


To identity and beyond—One architect's viewpoint

Simple is often better: You can do (almost) anything with technology, but it doesn't mean you should. Especially in the security space, many customers overengineer solutions. I like this video from Google’s Stripe conference to underscore this point. People, process, technology: Design for people to enhance process, not tech first. There are no "perfect" solutions. We need to balance various risk factors and decisions will be different for each business. Too many customers design an approach that their users later avoid. Focus on 'why' first and 'how' later: Be the annoying 7-year-old kid with a million questions. We can't arrive at the right answer if we don't know the right questions to ask. Lots of customers make assumptions on how things need to work instead of defining the business problem. There are always multiple paths that can be taken. Long tail of past best practices: Recognize that best practices are changing at light speed.



Quote for the day:

"Eventually relationships determine the size and the length of leadership." -- John C. Maxwell

Daily Tech Digest - December 11, 2021

Why a Little-Known Blockchain-Based Identity Project in Ethiopia Should Concern Us All

We have countless examples of the dangers of national ID schemes in general, including from Kenya, Uganda, Pakistan, India and elsewhere. But while national ID schemes can be highly problematic, building them on blockchain could be catastrophic. Putting aside the very obvious logistical hurdles, including very low internet penetration rates in Ethiopia (that are significantly lower in more rural regions) and the displacement of children from schools due to ongoing conflict and humanitarian challenges, there are much deeper problems with Hoskinson’s plans. Blockchain is fundamentally an accounting technology designed to track and trace digital assets through an immutable ledger of transactions. Blockchain-based ID schemes similarly treat identity as a transactional, mathematical problem. The more transactions, the more profitable for the network. There are also serious privacy and data protection concerns with the logging of all this metadata. While proponents of blockchain-based ID claim that concerns are unfounded if the system is designed correctly and identity documents are kept off ledger, the dangers of metadata in this context are well-documented.


Everyone is burned out. That's becoming a security nightmare

In many organisations, it's cybersecurity staff who are there to counter activity that could make the network vulnerable to cyberattacks – but according to the paper, cybersecurity professionals are more burned out than other workers. The research suggests that 84% of security professionals are feeling burned out, compared with 80% of other workers. And when cybersecurity employees are burned out, they're more than likely to describe themselves as "completely checked out" and "doing the bare minimum at work" – something that one in 10 cybersecurity professionals described as their state of mind compared with one in 20 of other employees. That attitude could easily result in security threats being missed or flaws not being fixed in time, something that could put the whole company at risk from cyber incidents. "Pandemic-fueled burnout – and resultant workplace apathy and distraction – has emerged as the next significant security risk," said Jeff Shiner, chief executive officer at 1Password. "It's particularly surprising to find that burned-out security leaders, charged with protecting businesses, are doing a far worse job of following security guidelines – and putting companies at risk".


How Can We Get​ Blockchains to Talk to Each Other?

Solving this problem is a booming area of research though, and last month Schulte and his colleagues presented a potential workaround at the IEEE International Conference on Blockchain Computing and Applications. Their approach relies on blockchain relays, which are essentially smart contracts running on one blockchain that can verify events on another blockchain. If a user wants to transfer an asset they first destroy, or “burn,” it on the source blockchain, which is typically done by sending the asset to a user address that doesn’t exist. This transaction also includes details of the asset and which blockchain and user they want to send it to. Third parties monitor the source blockchain for these burn transactions and then send them to the relay for a small reward; the relay verifies the burn transaction and recreates the asset on the new blockchain. The challenge, says Schulte, is that these verification processes incur transaction fees that can quickly make the approach impractical. So they created a verification on-demand system where the relay assumes transactions are valid unless they are disputed.
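The burn-and-recreate flow can be simulated in a few lines. This is a toy model of the optimistic "verify only on dispute" variant; the class and field names are invented for illustration and do not reflect the researchers' actual contracts:

```python
class Chain:
    def __init__(self):
        self.balances = {}
        self.burns = []  # burn transactions visible to off-chain watchers

    def burn(self, user, asset, dest_chain, dest_user):
        """Destroy the asset by removing it from circulation on this chain."""
        self.balances.pop((user, asset), None)
        tx = {"asset": asset, "to_chain": dest_chain, "to_user": dest_user}
        self.burns.append(tx)
        return tx

class Relay:
    """Smart-contract-like relay on the destination chain (optimistic variant)."""
    def __init__(self, source, dest):
        self.source, self.dest = source, dest

    def claim(self, tx, disputed=False):
        # On-demand verification: only check the source chain if someone disputes
        if disputed and tx not in self.source.burns:
            return False
        self.dest.balances[(tx["to_user"], tx["asset"])] = True  # re-create asset
        return True
```

The point of the on-demand design is that the expensive source-chain check in `claim` runs only when a transaction is disputed, avoiding verification fees in the common case.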


DeFi architect Andre Cronje said it’s time to give up on the inaccurate term “decentralized finance”

“We aren’t decentralized, the old guard will keep trying to use it as their ‘attack’ vector,” he added in a disheartening tone, as he proposed a couple of alternative coined terms. According to Cronje, “open finance” or “web3 finance” present some better-suited options that would describe the sector more accurately. Cronje’s unreserved commentary is tough to challenge – thanks to his vast experience and track record. After launching Yearn in 2020, Andre made a move that granted him a somewhat legendary status in the crypto community – he chose to distribute all YFI tokens amongst liquidity providers, without withholding any for himself or the Yearn development fund. Some of Cronje’s recent projects include the decentralized stablecoin exchange protocol Fixed Forex, and Keep3r Network, which facilitates the interaction between those looking for external developers and job executors – known as Keepers. He was also involved in developing Fantom – a highly scalable Layer 1 blockchain.


DevOps Teams Struggling to Keep Secrets

From Carson’s perspective, secrets management is the ability to move away from hardcoded passwords or static keys to just-in-time privileges or one-time-use passwords so that even when compromised they cannot be reused. “Many privileged access management solutions that protected privileged access for years have extended functionality to developers to help move the value into DevOps so they can manage credentials for applications, databases, CI/CD tools and services without causing friction in the development process,” he said. Approaches like privileged access security help enable API-as-a-service and provide instant availability of secrets, SSH keys, certificates, API keys and tokens. Bambenek added the problem isn’t choosing a secrets management process or tool, but rather that they aren’t in place at all. “Pick something that will keep keys and secrets out of public cloud repositories that developers will use that allows for quick and easy rotation of keys as the need arises,” he said.
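A toy sketch of the one-time-use idea described above (not any particular vendor's API): each credential is minted on demand and invalidated the first time it is used, so a leaked or stale copy is worthless afterwards.

```python
import secrets

class SecretBroker:
    """Sketch of just-in-time secrets: credentials are minted on request
    and invalidated the first time they are used."""
    def __init__(self):
        self._live = {}

    def issue(self, service):
        token = secrets.token_hex(16)
        self._live[token] = service
        return token

    def use(self, token):
        # One-time use: valid exactly once, then removed from the live set
        return self._live.pop(token, None) is not None
```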


DeepMind debuts massive language A.I. that approaches human-level reading comprehension

DeepMind’s language model, which it calls Gopher, was significantly more accurate than these existing ultra-large language models on many tasks, particularly answering questions about specialized subjects like science and the humanities, and equal or nearly equal to them in others, such as logical reasoning and mathematics, according to the data DeepMind published. This was the case despite the fact that Gopher is smaller than some ultra-large language software. Gopher has some 280 billion different parameters, or variables that it can tune. That makes it larger than OpenAI’s GPT-3, which has 175 billion. But it is smaller than a system that Microsoft and Nvidia collaborated on earlier this year, called Megatron, which has 530 billion, as well as ones constructed by Google, with 1.6 trillion parameters, and Alibaba, with 10 trillion. Ultra-large language models have big implications for business: they have already led to more fluent chatbots and digital assistants, more accurate translation software, better search engines, and programs that can summarize complex documents.


Dangerous “Log4j” security vulnerability affects everything from Apple to Minecraft

This vulnerability was discovered by Chen Zhaojun of the Alibaba Cloud Security Team. Any service that logs user-controlled strings was vulnerable to the exploit. The logging of user-controlled strings is a common practice by system administrators in order to spot potential platform abuse, though those strings should then be “sanitized” — the process of cleaning user input to ensure that there is nothing harmful to the software being submitted. The exploit has been dubbed “Log4Shell”, as it’s an unauthenticated RCE vulnerability that allows for total system takeover. There’s already a proof-of-concept exploit online, and it’s ridiculously easy to demonstrate that it works through the use of DNS logging software. If you remember the Heartbleed vulnerability from a number of years ago, Log4Shell definitely gives it a run for its money when it comes to severity. “Similarly to other high-profile vulnerabilities such as Heartbleed and Shellshock, we believe there will be an increasing number of vulnerable products discovered in the weeks to come,” the Randori Attack Team said in their blog today.
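As an illustration of the "sanitizing" step mentioned above, here is a naive Python filter that iteratively strips `${...}` lookup sequences, including the nested forms attackers used to evade single-pass filters. This is a stopgap sketch only: the actual remediation was patching Log4j itself.

```python
import re

LOOKUP = re.compile(r"\$\{[^{}]*\}")

def sanitize(user_input):
    """Strip ${...} lookup sequences, repeating until none remain so that
    nested payloads like ${${lower:j}ndi:...} are also removed."""
    prev = None
    while prev != user_input:
        prev = user_input
        user_input = LOOKUP.sub("", user_input)
    return user_input
```

A single-pass filter would miss the nested form, which is why the loop runs until the string stops changing.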


It’s time for tech to embrace security by design

Basic cybersecurity hygiene is the key to protecting your devices against the most common types of malware, but we also need security built into technology to prevent these sophisticated cyberattacks. The Secret Service is certainly best known for protecting the president. But its other primary mission is to safeguard the nation’s financial infrastructure and payment systems to preserve the integrity of the economy from a wide range of financial and electronic crimes, including U.S. counterfeit currency, bank and financial institution fraud, illicit financing operations, identity theft, access device fraud and cybercrimes. With the prevalence of mobile devices in today’s world, that means that, as the Department of Homeland Security (DHS) recommends, “users should avoid — and enterprises should prohibit on their devices — sideloading of apps and the use of unauthorized app stores.” The pandemic has been a boon to cybercriminals, taking “advantage of an opportunity to profit from our dependence on technology to go on an internet crime spree,” said Paul Abbate, deputy director of the Federal Bureau of Investigation.


Simulating matter on the quantum scale with AI

Although DFT proves a mapping exists, for more than 50 years the exact nature of this mapping between electron density and interaction energy — the so-called density functional — has remained unknown and has to be approximated. Despite the fact that DFT intrinsically involves a level of approximation, it is the only practical method to study how and why matter behaves in a certain way at the microscopic level and has therefore become one of the most widely used techniques in all of science. Over the years, researchers have proposed many approximations to the exact functional with varying levels of accuracy. Despite their popularity, all of these approximations suffer from systematic errors because they fail to capture certain crucial mathematical properties of the exact functional. By expressing the functional as a neural network and incorporating these exact properties into the training data, we learn functionals free from important systematic errors — resulting in a better description of a broad class of chemical reactions.
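For background, in the standard Kohn–Sham formulation (general DFT convention, not notation specific to this work) the total energy decomposes as:

```latex
E[\rho] = T_s[\rho] + \int v_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r} + E_H[\rho] + E_{xc}[\rho]
```

The first three terms (non-interacting kinetic energy, external potential energy, and Hartree repulsion) can be computed exactly; the approximation error lives entirely in the exchange–correlation functional $E_{xc}[\rho]$, which is the piece being represented here as a neural network.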


A Paradigm Shift in App Delivery

As the shift to cloud accelerates, organizations are also looking for ways to reduce risk as they deliver apps over the cloud. “I think recently the pandemic has made every digital business an experience-delivery company,” Gupta said. “If you talked about transition to cloud and SaaS a few years back, everybody was going towards it. But the question now is how fast I can go, and how confidently while reducing the risk I can achieve with a hyper transition to the cloud and it’s [creation of] a lot of new opportunities and challenges.” Another main reason organizations are making the shift to cloud-based deployments is to benefit from “auto-scaling,” Gupta said. “But the challenge with auto-scaling is that you have to do a lot of guesswork about CPU and memory… and if your intent or requirements change, you must go back to square one and repeat that cycle multiple times,” Gupta said. This is among the reasons why organizations are increasingly rethinking their application-delivery approaches. “This is the time to look at your application-delivery infrastructure and to take a new radical approach to build a new application delivery and security infrastructure,” Gupta said.



Quote for the day:

"It is time for a new generation of leadership to cope with new problems and new opportunities for there is a new world to be won." -- John F. Kennedy

Daily Tech Digest - December 10, 2021

App Modernization: Why ‘Lift and Shift’ Isn’t Good Enough

App modernization is about creating a set of best practices and competency building. It’s about continuous learning — which is very attractive for highly recruitable tech workers. Kerry Schaffer is senior director of information technology at OneMagnify; her job includes overseeing data center operations. In 2020, OneMagnify had a very tight customer deadline to deliver a feature for taking reservations for the pre-launch of an iconic vehicle. With microservices hosted by the Tanzu application, Schaffer’s team just had to make a few continuous integration/continuous delivery (CI/CD) deployments. The team delivered on time and the customer got double the reservations it anticipated. “The fact that it was on a scalable platform meant that we were able to serve all the customers without any outages,” Schaffer said. Since then, she added, the same customer has launched four other vehicle reservation systems, and “because we wrote that in a modern way, we’ve been able to reuse all that architecture.”


New research shows IoT and OT innovation is critical to business but comes with significant risks

The Ponemon research shows us that a good percentage of the surveyed respondents are encountering IoT and OT attacks. Nearly 40 percent of respondents told us that they’ve experienced attacks where the IoT and OT devices were either the actual target of the attack (for example, to halt production using human-operated ransomware) or were used to conduct broader attacks (such as lateral movement, evade detection, and persist). Most respondents felt these types of attacks will increase in the years to come. 39 percent of respondents experienced a cyber incident in the past two years where an IoT or OT device was the target of the attack; 35 percent of respondents say in the past two years their organizations experienced a cyber incident where an IoT device was used by an attacker to conduct a broader attack; 63 percent of respondents say the volume of attacks will significantly increase. One thing to keep in mind with these last three statistics is that the study also showed that customers have low to average confidence in their ability to detect when IoT and OT devices have been compromised.


Exploring the paradoxical rise and uncertain future of crypto

Interestingly, crypto investors are open to the idea of greater regulation in the market, for the most part. Based on data from GWI, 46% of crypto investors say they support regulation, and this rises to more than half of consumers who say they already use crypto for transactions. Many investors think regulation will work to normalise the budding digital economy. These optimistic crypto enthusiasts hope that some regulation (emphasis on the “some”) will allow more businesses to accept crypto as payment for goods and services, and put crypto on the same plane as conventional money. However, these same investors also worry that any regulation will severely limit the things they value most about crypto. Over a third of current investors predict regulation will result in more government surveillance and reduce the privacy and anonymity currently guaranteed by crypto. The free and anonymous nature of crypto is often used to paint it as a force democratising finance, but the prospect of regulation makes it clear that this future could be on the chopping block.


"Hello Quantum World:" New cybersecurity service uses entanglement to generate cryptographic keys

The product supports RSA and AES algorithms as well as the post-quantum cryptography algorithms being standardized by the National Institute of Standards and Technology. The service is priced per key generated for customers. Jones said that the company has export controls in place to screen customers who want to use the service. "As part of our customer onboard process, we do due diligence to make sure use cases and destination countries are all above board," he said. Khan described Quantum Origin as a defensive technology as opposed to an adversarial one. "We are focused on protecting the technology that creates the key, not selling it," he said. "We are selling the product created by that technology." Cambridge Quantum will offer the new service to financial services companies and cybersecurity vendors initially and later to telecommunications, energy, manufacturing, defense and governments. ... In a proof-of-concept project, Fujitsu used the service in its software-defined wide area network using quantum-enhanced keys with traditional algorithms.


How will emerging technologies impact the data storage landscape?

Dependence on technology providers and cloud services based outside of their geographies is an increasing concern for global enterprises. Data sovereignty regulations, such as the Data Governance Act in Europe, are an indication of the acknowledged power of data and its increasing role as the emerging currency for digital transformation. Companies are struggling to keep track of the location of their data and meet compliance with local regulations. This will usher in an industry of local and regional service providers offering sovereign cloud services to captive markets by ensuring the data stays within specified borders. ... Even as public cloud investment continues, enterprises will maintain their corporate on-premises data centre infrastructure for reasons of control, performance and cost-efficiency. This will lead to a new level of sophisticated IT management capabilities to optimise multi-data centre, multi-cloud application and data management solutions. 


Zero Trust Private Networking Rules

SaaS applications and Zero Trust Networking solutions like Cloudflare Access have made it easier to provide a secure experience without a VPN. Administrators are able to configure controls like multi-factor authentication and logging alerts for anomalous logins for each application. Security controls for public-facing applications have far outpaced applications on private networks. However, some applications still require a more traditional private network. Use cases that involve thick clients outside the browser or arbitrary TCP or UDP protocols are still better suited to a connectivity model that lives outside the browser. We heard from customers who were excited to adopt a Zero Trust model, but still needed to support more classic private network use cases. To solve that, we announced the ability to build a private network on our global network. Administrators could build Zero Trust rules around who could reach certain IPs and destinations. End users connected from the same Cloudflare agent that powered their on-ramp to the rest of the Internet. However, one rule was missing.
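A rule of this kind ("who could reach certain IPs and destinations") reduces to a default-deny check, as in the sketch below. The rule schema and group names are hypothetical and do not reflect Cloudflare's actual configuration format:

```python
import ipaddress

# Hypothetical rule set: which identity groups may reach which private networks
RULES = [
    {"group": "engineering", "cidr": "10.0.1.0/24"},
    {"group": "finance",     "cidr": "10.0.2.0/24"},
]

def may_connect(user_groups, dest_ip):
    """Allow the connection only if some rule grants one of the user's
    groups access to the destination network (default deny)."""
    addr = ipaddress.ip_address(dest_ip)
    return any(
        rule["group"] in user_groups
        and addr in ipaddress.ip_network(rule["cidr"])
        for rule in RULES
    )
```

The default-deny stance is the Zero Trust part: absent an explicit rule, no identity can reach any private destination.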


Natural language processing is shaping intelligent automation

Unstructured information management platforms allow you to automate a lot of research work: for example, lawyers can use them to run intelligent queries over existing patents or case law, and medical researchers can use them in drug discovery or look for relevant gene interactions in the literature. Rather than spending time poring over reams of documents, a human researcher can quickly review the suggestions and insights provided by the UIM platform, making them more productive overall and freeing up their time and mental energy for the more creative and high-level aspects of the job. ... You can use sentiment analysis to perform automatic real-time monitoring of consumer reactions to your brand, especially in response to a new product launch or ad campaign, which will help you to tailor your future products and services accordingly. It can also automatically alert you to any eruptions of criticism or negativity about your brand on social media, without the need for human staff actively monitoring channels 24/7, so that you can respond in time to avert a PR crisis.
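As a minimal illustration of this alerting pattern, the sketch below swaps in a toy word-list scorer where a production system would use a real sentiment model; the lexicon and threshold are invented:

```python
# Toy lexicon scorer standing in for a real NLP sentiment model
NEGATIVE = {"terrible", "broken", "refund", "scam", "worst"}
POSITIVE = {"love", "great", "amazing", "works"}

def score(mention):
    """Crude per-mention sentiment: positive hits minus negative hits."""
    words = set(mention.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def should_alert(mentions, threshold=-2):
    """Raise a PR alert when aggregate sentiment over a window of mentions
    dips below the threshold, with no human watching channels 24/7."""
    return sum(score(m) for m in mentions) <= threshold
```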


Managing Compliance with Continuous Delivery

A typical enterprise application might comprise hundreds of small processes called microservices. Validating the compliance and regulation checks on hundreds of different applications is more manageable than one extensive application. This is because you can easily pin and regulate a noncompliant process during deployment checks. If a microservice isn’t compliant, the team rejects the deployment for that microservice only, not the entire stack. This rejection also alerts the developers responsible for the microservice’s maintenance to ensure compliance in their codebase. Sometimes it’s not technically possible to debug and run the solution locally. For example, if your teams must provision and analyze the logs your app generates, it might not be feasible to run the entire cluster on a developer machine. However, provisioning a test or development environment for every team is expensive in licensing, hardware and staffing. In contrast, with microservices, each team can run their project locally, ensure compliance, and then push it for deployment. 
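The per-service gating described above might look like this sketch, where each microservice manifest is checked independently and only the failing services are rejected. The required keys are hypothetical examples of compliance rules:

```python
# Illustrative compliance rules a deployment pipeline might enforce
REQUIRED_KEYS = {"owner", "data_classification", "logging_enabled"}

def check_service(manifest):
    """A service passes only if every required key is present and logging is on."""
    return REQUIRED_KEYS <= manifest.keys() and manifest["logging_enabled"]

def gate(manifests):
    """Reject only the noncompliant services, not the entire stack."""
    deploy, reject = [], []
    for name, manifest in manifests.items():
        (deploy if check_service(manifest) else reject).append(name)
    return deploy, reject
```

Because the check runs per service, a failure alerts only the owning team while the rest of the stack deploys normally.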


IT careers: 5 secrets to making a successful change

The fear of being rejected prevents some IT professionals from going after their dreams. But rejection is a fact of life. Failure is always possible when you take risks, so you can’t let that hold you back. Instead, turn your fears into fuel. Before you make a career jump, practice what rejection feels like in small doses. Put yourself in low-risk situations where you can build your muscle for rejection. For instance, if you’re an IT professional just getting started at a new company, offer to perform a planned email migration or server maintenance updates.  ... Think of this as a mirage of uncertainty. Begin a daily practice in which you move beyond the shadow of a doubt. There is a proven power in imagining yourself succeeding in what you’re about to do. If you are doing something new, reframe your inexperience by reminding yourself that you’re not expected to be an expert immediately. Expertise only comes with time. Finally, give yourself the same advice your best friend would give you. This exercise can be a great way to keep you from harboring negative thoughts.


Observability: It’s Not What You Think

Monitoring tells you something is wrong, but it doesn’t tell you why. Monitoring setups can also only watch things you already suspected could be problematic (your ‘known knowns’). If you didn’t think to instrument the component in question in advance, you can’t monitor it. What’s worse, if you then have a problem there and decide to add monitoring, you still don’t have the historical data about how the component performed. Monitoring also requires special attention before you even know what could go wrong: you have to instrument specific things and set up specific alerts about them, which takes time and is prone to errors. And no matter how well-instrumented your monitoring solution is, it still doesn’t let you explore your business. Looking into ‘unknown unknowns’ isn’t possible with a classic monitoring system, because the data simply doesn’t exist for you to evaluate. Adding in business metrics is generally unsupported, or poorly supported, in traditional monitoring.
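The ‘known knowns’ limitation can be made concrete with a small Python sketch. The metric names and event fields are hypothetical; the point is that a pre-declared metric list silently drops anything you didn’t anticipate, while wide structured events can answer questions asked only after the fact.

```python
# Classic monitoring: metrics must be named in advance.
instrumented = {"cpu": [], "latency_ms": []}

def record_metric(name, value):
    if name in instrumented:          # anything else is silently lost
        instrumented[name].append(value)

# Observability-style wide events: keep the whole context, query later.
events = []
def record_event(**fields):
    events.append(fields)

record_metric("queue_depth", 950)     # never instrumented -> dropped
record_event(route="/checkout", queue_depth=950, region="eu", status=503)

# An 'unknown unknown' question nobody thought to pre-instrument:
slow_eu = [e for e in events
           if e.get("region") == "eu" and e.get("status", 0) >= 500]
```

The dropped `queue_depth` metric is the historical data you can never get back; the event query is the exploration a fixed dashboard cannot do.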



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others" -- Jack Welch

Daily Tech Digest - December 09, 2021

How should we regulate DeFi?

There is opportunity for the appropriate level of regulation to give DeFi enough breathing space to make a difference: boosting transparency, increasing financial inclusion and extending credit to 8 billion people, a change that would see the world take a tremendous jump toward prosperity. Yet there is also potential for overreach that would stifle innovation and growth and have unintended consequences. Unfortunately, we seem to be well down this path already. What is needed is the realization that DeFi shares many of the same goals as financial regulators: overhauling inflexible processes and delivering wider access, cheaper prices and more stability — all while ensuring these benefits are widely shared with all participants in the market. ... DeFi has the potential to create fairer, more transparent and more liquid markets through completely new mechanisms, helping everyone to reduce fraud and front-running, resolving fragmentation and creating markets that are efficient, resilient, fair and equally accessible to all — not just participants that have the right connections.


How to make agile actually work for analytics

The most striking difference between what we do and what software developers do is in our end products. In software, the goal is to get to a product that the end-user loves. In data, our goal is to help people make a decision they trust, and the journey the user takes to get there can be just as important as the end result. Most commonly we see this manifested in how we tell stories with our data. We use notebooks to capture context and process, and presentations to guide users to an understanding. It’s in this process that we establish trust, turn charts into insights, and make our data valuable. This is also the driver behind one of the greatest pains of our work: the follow-up questions, and ad-hoc requests. These questions and requests come from a place of curiosity and represent a desire to have that same intimate understanding of data that we get in crafted data stories. And yet, in practice, we try to eliminate these questions with processes that front-load requirements gathering and tools that have made no room for this way of working.


Cloudentity SaaS platform enables zero trust access control for APIs

Deployable in minutes, Cloudentity empowers businesses to deliver Open Banking, Embedded Finance and other innovative online services without changing identity providers or application code. Cloudentity delivers a declarative identity and authorization framework that works across any cloud to simplify access control and data governance. From Open Banking to eCommerce fraud prevention, Cloudentity makes it easier to deliver cloud-native applications and safer to extend your data to the customers and partners that matter most. A standout capability of the new SaaS platform is its drag and drop Data Lineage feature, which provides a simple and intuitive way of mapping identity and user context data to an application. For developers, Data Lineage solves the complexities of Single Sign On (SSO) and provides real-time control over who can access each element of your API data. For ITops, DevOps and SecurityOps, teams can rapidly validate controls and pinpoint areas that need to be updated or fixed to prevent API data leakage and meet personal data protection obligations.


SaaS DR/BC: If You Think Cloud Data is Forever, Think Again.

Humans and technology have always had co-dependent challenges. Let’s face it, it’s one of the main reasons my career exists! So it stands to reason that human interference, whether deliberate or not, is a common reason for losing information. This can be as innocuous as uploading a CSV file that corrupts data sets, accidentally deleting product listings, or overwriting code repositories with a forced push. There’s also intentional human interference. This means someone who has authorized access nuking a bunch of stuff. It may sound far-fetched, but we have seen terminated employees or third-party contractors cause major issues. It’s not very common, but it happens. Cyberthreats are next on the list, which are all issues that most technical operations teams are used to. Most of my peers are aware that the level of attacks increased during the global pandemic, but the rate of attacks had already been increasing prior to COVID-19. Ransomware, phishing, DDoS, and more are all being used to target and disrupt business operations. If this happens, data can be compromised or completely wiped out.


Starting an SRE Team? Stay Away From Uptime.

Why shouldn't you be too concerned about your uptime metrics? In reality, SRE can mean different things to different teams, but at its core it’s about making sure your service is reliable. After all, it’s right there in the name. Because of this, many people assume that uptime is the most valuable metric for SRE teams. That is flawed logic. For instance, an app can be “up,” but if it’s incredibly slow or its users don’t find it practically useful, then the app might as well be down. Simply keeping the lights on isn’t good enough, and uptime alone doesn’t account for things like degradation or pages that fail to load. It may sound counterintuitive, but SRE teams are in the customer service business. Customer happiness is the most important metric to pay attention to. If your service is running well and your customers are happy, then your SRE team is doing a good job. If your service is up and your customers aren’t happy, then your SRE team needs to reevaluate. A more holistic approach is to view your service in terms of health.
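One way to make “health over uptime” concrete is a composite score that folds latency and error rate in alongside availability. The weights and thresholds below are illustrative assumptions, not figures from the article.

```python
def health(checks):
    """Composite health score in [0, 1]. Uptime alone is not enough:
    latency and error rate, as proxies for customer happiness, carry
    most of the weight. Weights/thresholds are illustrative."""
    score = 0.0
    score += 0.4 if checks["up"] else 0.0
    score += 0.4 if checks["p95_latency_ms"] <= 300 else 0.0
    score += 0.2 if checks["error_rate"] <= 0.01 else 0.0
    return score

# 'Up' but painfully slow: uptime says fine, health says otherwise.
up_but_slow = {"up": True, "p95_latency_ms": 4200, "error_rate": 0.002}
healthy = {"up": True, "p95_latency_ms": 180, "error_rate": 0.001}
```

The slow service scores 0.6 despite being “up”, which is the gap between keeping the lights on and keeping customers happy.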
 

An opportunity is coming to drive up the number of women in tech

Another key element is creating the right culture and environment for diversity to thrive. In a gender context, an important aspect here is male allyship. Men have a real role to play in supporting the ‘levelling up’ agenda. They need to see that increasing gender diversity and equity is not just an issue for women themselves – it’s for everyone. They can become active allies through their own behaviours and actions. This extends right up to board level and executive leadership. We need to continue to work to influence leader behaviour and build their understanding of people’s different styles. Instances of men talking over women in the boardroom or not listening to ideas are still all too common. Reporting is also critical. You can’t change what you don’t measure. Collating diversity statistics and reporting them to the board and more widely around the business is an essential part of raising awareness and stimulating action. Transparent reporting was in fact seen as the most effective lever for improving diversity and inclusion in this year’s survey.


Is the “great resignation” coming for you?

When employees feel their personal ambitions are too difficult to achieve, they start to think about leaving. Those ambitions might involve having a family while maintaining a career, gaining a range of professional experiences, or even accumulating personal experiences such as travel. People will ask: “I don’t mind making sacrifices, but are the trade-offs producing the benefits I expected?” When that question surfaces, employees are already halfway out the door. For example, young men and women who are working extremely hard and don’t have time for friends, exercise, or adventures may start to doubt that the company is the right place for them—even if the pay is fabulous. ... Managers often show great care about performance and little concern about the whole person who is delivering the results. Feeling uncared for is deadly for motivation and destructive to performance over the long run. Many managers rarely ask about other aspects of their team members’ lives, their personal interests, or their ambitions. Too few managers show genuine understanding and appreciation for what it took to deliver such great results.


DevSecOps jobs: 3 ways to get hired

Automation is a major part of DevSecOps, and this requires the use of multiple software applications and tools. For example, companies use a variety of different application security testing tools (ASTs), which are essential to ensure that the code being used in development is safe and to prevent malicious packages from being introduced. These tools can be static (SAST), dynamic (DAST), and interactive (IAST) and they can also be from different vendors. Some may include automated vulnerability detection, prioritization, and even remediation capabilities that can address issues without requiring IT staff to spend much time researching vulnerabilities. The lesson: Many different tools are used in DevSecOps, and these will likely change as new innovations are introduced. Stay informed and updated on industry trends, especially if you are early in your journey because the tools and needs of today might be very different in a few years’ time. The idea behind shifting left and DevSecOps is to break down the traditional separation between developers, security, and IT professionals.


Google TAG Disrupts Blockchain-Enabled Botnet

Google is skeptical about the complete disruption of Glupteba's operations. It says: "The operators of Glupteba are likely to attempt to regain control of the botnet using a backup command and control mechanism that uses data encoded on the Bitcoin blockchain." The botnet also has a feature that allows it to evade traditional takedowns. TAG says that a conventional botnet-infected device looks for predetermined domain addresses that point to the C2 server. The instructions to locate these domains are hard-coded in the malware installed on the victim's device. If the predetermined domains are taken down by law enforcement agencies or others, the infected devices can no longer receive instructions from the C2 servers and therefore can no longer be operated by the bot controller. The Glupteba botnet, however, does not rely solely on predetermined domains to ensure its survival, the TAG researchers note. They say that when the botnet’s C2 server is interrupted, Glupteba malware is hard-coded to search the public Bitcoin blockchain for transactions involving three specific Bitcoin addresses that are controlled by the Glupteba botnet operators.
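A heavily simplified Python sketch of that fallback mechanism: scan transactions from a set of hard-coded wallet addresses for an OP_RETURN output carrying the next C2 domain. The addresses, transaction layout and payload here are all invented, and the real malware decrypts the payload rather than merely hex-decoding it.

```python
# Hard-coded watch list, standing in for the three real Bitcoin
# addresses mentioned above (these values are made up).
WATCHED_ADDRESSES = {"bc1qexample0", "bc1qexample1", "bc1qexample2"}

def extract_c2(transactions):
    """Return the first domain found in an OP_RETURN output of a
    transaction sent from a watched address, else None."""
    for tx in transactions:
        if tx["from"] not in WATCHED_ADDRESSES:
            continue
        for out in tx["outputs"]:
            if out["type"] == "OP_RETURN":
                return bytes.fromhex(out["data"]).decode()
    return None

chain = [
    {"from": "bc1qother", "outputs": []},
    {"from": "bc1qexample1",
     "outputs": [{"type": "OP_RETURN", "data": "6e65772d63322e6578616d706c65"}]},
]
new_c2 = extract_c2(chain)
```

Because anyone can write to the blockchain but no one can take it down, this channel survives the domain seizures that kill conventional botnets.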


You’re Doing it Wrong: It’s Not About Data and Applications – It’s About Processes

We often model processes to document them, to validate them with stakeholders, to teach them to others – and most of all, to improve them. In far too many companies, what they do and why they do it is implicit, not communicated well, and invites plenty of competing points of view as to what it really is. You need to tackle the process first before you attempt to automate any of its tasks. Not doing so would be like digging holes with a crane instead of a shovel, but without thinking about whether the holes are being dug in the right places (or should be dug at all). It’s not enough to think about saving time and money. Automating a process (not just its activity) documents it, makes it teachable and scalable, and goes a long way to reducing or eliminating mistakes (high-profile errors can be a major catalyst for process automation). It also makes a process easily audited and monitored. And it’s a lot easier to figure out how to improve a process you can see. And improvement is a must; if there’s one thing to expect when it comes to process automation, it’s change.



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - December 08, 2021

Entrepreneurship for Engineers: Making Open Source Pay

Things shift as the application is deployed and scaled. At that point, the fact that something is open source quickly becomes irrelevant. Instead, engineers — and business leaders — care about things like reliability and security. And they are willing to pay for it. If an open source project is geared primarily towards the “build” phase and is either less visible or less valuable at the deploy and scale phase, it will be hard to monetize, no matter how popular it is. Similarly, it’s always easier to monetize a project that provides a mission-critical capability, something that would directly impact users’ revenue if it didn’t work. A project that facilitates payments is going to be very easy to monetize, but a project that makes the fonts on a webpage particularly beautiful has an uphill battle. As an example, Fanelli pointed to Temporal, an open source microservice orchestration platform started by the creator of the Cadence project, which was developed by Uber to ensure that jobs don’t fail and was used for things like ensuring that payments are processed.


The checklist before offering and accepting a job especially for IT Industry

Avoid using a fail-fast strategy on employees: sometimes organizations hire buffer candidates and, if they don’t meet expectations, ask them to leave. Instead, candidates must be assessed thoroughly and given enough time to perform; Short notice period: 4-6 weeks of notice is a good length for knowledge transition, and during long notice periods many employees are unproductive, spending the time completing HR formalities; Right references: if candidates work with a company for a longer duration, they build good references, and there is no point in references from candidates who have spent less than one year in a company; Faster onboarding: most of the time the onboarding process is very long, with many steps involved, starting with HR onboarding, followed by the practice/BU and the project team. It is good to show a collaborative approach while onboarding candidates, but it is also important to have quick discussions with the relevant teams.


Personal Data Protection Bill: 4 Reasons Why Governments Bat for Data Localisation

First, it makes the personal data of resident data principals vulnerable to foreign surveillance because arguably governments, in whose jurisdictions such servers are located, will have better access to the data. Second, storage and transference of personal data of resident data principals to jurisdictions with lax data protection laws also makes their data vulnerable. Third, it reduces the access of the domestic government of the data principals to this data thereby interfering with the discharge of their regulatory and law enforcement functions, including counter-terrorism and prevention of cyber attacks and cyber offences. This is because requests for such information are either denied citing law of the foreign country or its provisioning is often delayed given the inefficacious and time consuming MLAT (Mutual Legal Assistance Treaty) processes. Fourth, it leads to missed opportunities for the domestic industry that would otherwise be engaged in the provisioning of storage services in terms of foreign direct investment, creation of digital infrastructure and development of skilled personnel.


The DARL Language and its Online Fuzzy Logic Expert System Engine

DARL is an attempt to drag expert systems into the 21st century. DARL was initially created as a solution to a problem that still exists today in machine learning: how do you audit a trained neural network? That is, if you use machine learning to create a model for a real-world application, how do you ensure it doesn't accidentally do something bad, like identify the wrong person as a potential terrorist, or deny a loan to a minority group? Neural networks and other similar techniques produce models that are "black boxes". The answer the designer of DARL found was to use fuzzy logic rules as the model representation mechanism; algorithms exist to perform supervised, unsupervised and reinforcement learning on these rules. DARL grew out of that. Initially, the models were coded in XML, but later a fully fledged language was created so that all the usual tools like editors, interpreters, etc. could be used with the models. The rules are very easy to understand, so auditing them for unexpected effects is simple.
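To see why fuzzy rules are auditable, here is a minimal fuzzy-inference sketch in Python. It is not DARL syntax (DARL models are written in DARL's own language); the membership shapes, rule outputs and thresholds are arbitrary examples.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed(temp):
    """Three human-readable rules: if cool -> fan 0, if warm -> fan 40,
    if hot -> fan 100. Each rule fires in proportion to membership."""
    cool = tri(temp, -10, 10, 20)
    warm = tri(temp, 15, 22, 30)
    hot = tri(temp, 25, 35, 100)
    # Sugeno-style defuzzification: weighted average of rule outputs.
    votes = [(cool, 0), (warm, 40), (hot, 100)]
    total = sum(w for w, _ in votes)
    return sum(w * out for w, out in votes) / total if total else 0.0
```

Unlike a neural network's weight matrices, each rule can be read, questioned and corrected by a domain expert, which is the auditability argument the excerpt makes.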
 

CI/CD Is Still All About Open Source

Jenkins was a CI tool at heart and later morphed into a CI/CD tool. Many people think that this fork in the road may have hurt the continued evolution of continuous delivery in the long term. But that is an argument for another DevOps.com article (or maybe even a panel discussion at an upcoming DevOps live event). Regardless of where you stand on that issue, as an open source project, it is hard to argue with the success of Jenkins. Driving a lot of that success is the Jenkins plug-in architecture. There are literally thousands of plugins that allow Jenkins to work with just about anything. That is the engine that powered Jenkins, yes; but its secret superpower was and is open source. That said, Jenkins has grown a bit long in the tooth over the years. It’s not that it doesn’t do what it always did, it’s that what we do and how we do it has changed. Microservices, Kubernetes and even cloud have changed the very fabric of the tapestry in front of which Jenkins sits. The open source community that supports Jenkins should receive enormous credit here: It has tried mightily to keep up with the many changes.


The threats of modern application architecture are closer than they appear

Shift-left approaches begin yielding results, often vague and general ones, as soon as the developer writes the first line of code, so vulnerabilities can be caught as early as possible. Shift right, on the other hand, detects vulnerabilities closer to full deployment of the software, sometimes only in production runtime. Shifting toward the right is usually the easier approach, since it provides results that are more accurate and actionable: developers run the code and then find the mistakes. But it isn’t always the desirable choice, because many times the detection is simply too late. That means the fixes are harder and costlier, and in the worst-case scenario your organization could already have been exposed to the vulnerability. Shift left, by contrast, lets developers see security testing results as early as possible, saving both time and money for IT teams in the long run. The key to resolving this tension is fostering a painless testing methodology that can be envisioned as “one platform to rule them all.”

 

The defensive power of diversity in cybersecurity

As with many things in technology, new disruptive ways of thinking are required to address the problem. There is a need to instill platforms, funding, policies and processes that diversify the talent pool in cybersecurity, opening it up to as wide a range of backgrounds as possible. Intelligence and law enforcement agencies are leading the way, keen to reclaim the edge from attackers. What started with the FBI grappling with whether to hire hackers who smoke cannabis in 2014 has turned into more formalized programs with open arms to diversity. Organizations such as GCHQ, the U.K.’s signals intelligence agency, are leading the way by actively hiring neurodiverse individuals for their unique ability to spot patterns in data. As with anything in cyber, what starts in intelligence agencies has a knack of achieving mainstream adoption with those defending large corporations. Those in cybersecurity need to recognize that diversity is about more than just equality. It is about optimizing defensive capabilities by having access to the widest possible range of problem-solving abilities.


How CEOs can pass the cybersecurity leadership test

The first order of business for CEOs is connecting the organization’s mission to the security of data, assets, and people. CEOs can do this by articulating an unambiguous foundational principle that establishes security and privacy as operational goals and business imperatives. Aflac, the largest provider of supplemental insurance at the workplace in the United States, has positioned cybersecurity at the center of who they are and what they do as a company. “We are one of the few insurance companies that measures ourselves on how fast we pay,” Aflac CISO Tim Callahan says. “Our operational managers are held to a standard of paying our claims fast. Dan Amos, our chairman and CEO, has never lost sight of who our customers are, and how much trust they have in us, and how we’re there for them during their time of need. That extends to protecting their information. He understands what the lack of cyber protection can do to our brand, to our customers, to our reputation. If the CEO were not passionate about that, then there’s a bigger problem.”


Why Cloud Native Systems Demand a Zero Trust Approach

In the past, when organizations relied on their own private, often on-premises, data centers — and workers usually came to a physical office to do their jobs — security experts considered data and workloads to have a definable “perimeter” that needed to be defended. Bad actors, human or machine, were denied access to the network the way invaders were repelled from a castle: by building a (virtual) moat around it. Hence the use of authentication and authorization via individual logins and passwords. The architects who designed these systems assumed entities inside an organization could be trusted, and that users’ identities were not compromised. But that castle-and-moat approach is widely considered to be unreliable today. Not only is there no single “castle” to defend — but chances are, there’s already someone or something in your castle that shouldn’t be there. A Zero Trust approach makes the assumption that, as the horror movie tagline goes, the call is coming from inside the house. It assumes that someone or something that shouldn’t be there may already be on your network.
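The per-request mindset can be sketched in a few lines: every call re-verifies identity, device posture and an explicit policy, and the caller's network location earns nothing. The field names and policy table below are illustrative, not from any particular product.

```python
# Least-privilege policy: (user, resource) -> allowed actions.
# Entries are invented for illustration.
POLICY = {("alice", "payments-db"): {"read"}}

def authorize(request):
    """Zero-trust check applied to EVERY request, internal or not."""
    # 1. Identity must be proven each time, not once at the moat.
    if not request.get("token_valid"):
        return False
    # 2. Device posture is checked even for 'internal' callers.
    if not request.get("device_compliant"):
        return False
    # 3. The action must be explicitly allowed; default is deny.
    allowed = POLICY.get((request["user"], request["resource"]), set())
    return request["action"] in allowed

internal_call = {"user": "alice", "resource": "payments-db", "action": "write",
                 "token_valid": True, "device_compliant": True,
                 "source": "10.0.0.5"}  # being on the LAN counts for nothing
```

Note that `source` never appears in `authorize`: the castle-and-moat signal (an internal IP) is deliberately ignored, and the write is denied because it was never granted.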


How financial services companies are gaining value from cloud adoption

For cloud adoption to be successful, buy-in is required from the workforce and leadership. This is key to aligning tech investment and deployment with clear business goals, but a deep understanding of the strategic implication of cloud migration among C-suite and board members can sometimes be absent. Business leaders often believe it is the full responsibility of the CTO, but the discussion must go both ways, and therefore, there is a gap to be bridged between business and IT to ensure that both sides are on the same page. “It’s easy to forget that you need a case for change, and to overlook alignment of any staff member in charge of a team,” said Mould. “The leadership team also need to consider how they put the organisation across as an attractive place for talent to help them with the cloud migration.” “The alternative is to outsource a capability that won’t be invested in internally, but a big part of this adoption is thinking differently about the brain drain, and look at creating an internal capability.”



Quote for the day: 

"Leaders should influence others in such a way that it builds people up, encourages and edifies them so they can duplicate this attitude in others." -- Bob Goshen

Daily Tech Digest - December 07, 2021

Why 2022 will be the year of data sovereignty cloud

Governments around the world are facing pressure to enact more comprehensive data privacy legislation, in response to increasing consumer concerns about how personal data and digital activity is being stored and used. It’s particularly notable when it comes to the cloud because a business can store its data in any number of different geographic regions regardless of where the company itself might be based – and if they’re using public cloud providers, they might not even know where their data is physically being stored. This is where questions of cloud data sovereignty – the concept that data stored in the cloud is subject to the laws and regulations of the country that has jurisdiction of the physical servers and premises being used – becomes far more relevant. The world of data protection had a big wake-up call when the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) were passed. These two landmark pieces of legislation aimed to bring some degree of consistency around the collection and use of personally identifiable information – for one of the world’s biggest trading blocs and the US’ most populous state respectively.


5 cybersecurity myths that are compromising your data

There are still two long-held misconceptions around passwords. The first is that adding capital letters, numbers or special characters to your one-word password will make it uncrackable. This myth is perpetuated by a lot of business accounts which have these requirements. However, the real measure of password security is length. Software can crack short passwords, no matter how "complex", in a matter of days. But the longer a password is, the more time it takes to crack. The recommendation is using a memorable phrase -- from a book or song, for example -- that doesn’t include special characters. But determining a strong, (almost certainly) uncrackable password is only the first step. If the service you’re using is hacked and criminals gain access to your password, you’re still vulnerable. That’s where two-factor authentication (2FA) and multi-factor authentication (MFA) come in. These methods require you to set up an extra verification step. When you log in, you’ll be prompted to enter a security code which will be sent to your phone or even accessed via a dedicated verification app.
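The length-beats-complexity claim is easy to check with back-of-the-envelope arithmetic. The guessing rate below is an assumed rough figure for an offline attack, not one from the article.

```python
def crack_time_years(alphabet_size, length, guesses_per_second=1e11):
    """Worst-case brute-force time for a random password.
    1e11 guesses/second is an assumed offline-cracking rate."""
    seconds = alphabet_size ** length / guesses_per_second
    return seconds / (3600 * 24 * 365)

# 8 characters drawn from ~95 printable symbols ('complex'), versus a
# 24-character passphrase over lowercase letters plus space.
short_complex = crack_time_years(95, 8)
long_phrase = crack_time_years(27, 24)
```

The short “complex” password falls in well under a day of worst-case search, while the plain passphrase sits many orders of magnitude beyond any feasible attack, because the search space grows exponentially with length but only polynomially with alphabet size.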


How to protect air-gapped networks from malicious frameworks

Discovering and analyzing this type of framework poses unique challenges as sometimes there are multiple components that all have to be analyzed together in order to have the complete picture of how the attacks are really being carried out. Using the knowledge made public by more than 10 different organizations over the years, and some ad hoc analysis to clarify or confirm some technical details, researchers put the frameworks in perspective to see what history could teach cybersecurity professionals and, to a certain extent, even the wider public about improving air-gapped network security and our abilities to detect and mitigate future attacks. They have revisited each framework known to date, comparing them side by side in an exhaustive study that reveals several major similarities, even within those produced 15 years apart. “Unfortunately, threat groups have managed to find sneaky ways to target these systems. As air-gapping becomes more widespread, and organizations are integrating more innovative ways to protect their systems, cyber-attackers are equally honing their skills to identify new vulnerabilities to exploit,” says Alexis Dorais-Joncas.


5 DevOps Concepts You Need to Know

Continuous Integration (CI) and Continuous Delivery (CD) are fundamental DevOps concepts. They enable developers to manage their work, merge their changes into a central repository (or version control system), and release continuously. If you go back to the core DevOps principles, it’s all about achieving the best collaboration, whether or not you’re working on the same functions, classes, triggers, layouts, etc. Think of your worst ‘version control’ nightmares dissipating because of CI/CD. But watch out for the major misconception that this is achieved purely through ‘tooling’. After all, you can’t buy tools and simply expect them to fix your problems – if you buy a drill, the shelves don’t go up on their own! First, you must understand the process (how to level the boards, where to use wall anchors, and so on). In our developer world, it’s important to understand the tools and the processes that come along with them. Similarly, CI/CD tools won’t fix your problems if you don’t have the right process in place (such as a branch management strategy or environment strategy).


Are You Guilty of These 8 Network-Security Bad Practices?

With many people still working from home, the lines between work life and personal life have become blurred. Sometimes, it’s just easier to use a personal email account or computer for communicating with colleagues. But this can dramatically increase the risk of a phishing attack aimed at credential harvesting or malware distribution, which can turn your home computer or business laptop into a vector for malware infecting many other users—including work colleagues. Once in your company’s email server, it’s free to access critical data assets. ... Security-conscious companies wisely limit access to websites via the corporate network. But when working from home, all bets are off. So, your child might borrow your company laptop to visit a gaming or education site with weak security—or, worse yet, a malicious site that appears legitimate — potentially delivering malicious JavaScript which gains entry to your corporate network the next time you log in. The loosely collected cybercrime syndicate known as Magecart has elevated malicious JavaScript to an art, skimming credit-card information and login credentials from websites. 
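On the defensive side, one cheap check against Magecart-style injection is auditing which domains a page loads scripts from. The allowlist and sample page below are made up; a real deployment would also rely on Content-Security-Policy headers and subresource integrity, since attackers can tamper with allowed files too.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical allowlist of script hosts the site owner approved.
ALLOWED = {"shop.example", "cdn.example"}

class ScriptAudit(HTMLParser):
    """Collect external script sources served from unapproved domains."""
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src:
            host = urlparse(src).hostname
            if host and host not in ALLOWED:
                self.suspicious.append(src)

page = """
<script src="https://cdn.example/app.js"></script>
<script src="https://evil-analytics.example/skim.js"></script>
"""
audit = ScriptAudit()
audit.feed(page)
```

Run periodically against your own checkout pages, a scan like this flags the unexpected third-party script that a skimming campaign typically introduces.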


All About ‘Bank Python,’ a Finance-Specific Language Fork

Bank Python implementations also seem to be using their own proprietary data structure for tables, offering faster access to medium-sized datasets (while storing them more efficiently in memory). “Some implementations are lumps of C++ (not atypical of financial software) and some are thin veneers over sqlite3,” Paterson said. (His friend Salim Fadhley, a London-based developer, has even released an (all-Python) version of the table data structure called eztable.) Paterson concludes that while most programming has a code-first approach, Bank Python would be characterized as data-first. While it’s ostensibly object-oriented, “you group the data into tables and then the code lives separately.” Needless to say, Bank Python inevitably ends up getting its own internal integrated development environment (IDE) to handle all of its unique configuration quirks, and it even has its own unique version-control system for code. Paterson acknowledged the uncharitable assessment that it’s all just a grand exercise in distrusting anything that originated outside the company.
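A toy column-oriented table illustrates the “data-first” idea: the data lives in columns and the code that queries it lives separately. This is loosely inspired by the description above, not eztable's actual API or any bank's proprietary structure.

```python
class Table:
    """Minimal column-oriented table: one list per column."""
    def __init__(self, columns):
        self.columns = {name: [] for name in columns}

    def append(self, row):
        for name, value in zip(self.columns, row):
            self.columns[name].append(value)

    def rows(self):
        return list(zip(*self.columns.values()))

    def where(self, predicate):
        """Return a new Table of rows matching the predicate."""
        result = Table(self.columns)
        for row in self.rows():
            if predicate(dict(zip(self.columns, row))):
                result.append(row)
        return result

trades = Table(["ticker", "qty", "price"])
trades.append(("VOD", 100, 1.23))
trades.append(("BP", 50, 4.56))
big = trades.where(lambda r: r["qty"] >= 100)
```

Storing each column contiguously is what makes medium-sized datasets compact in memory and fast to scan, which is the trade-off the excerpt attributes to these proprietary table types.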


TSA Issues New Cybersecurity Requirements for Rail Sector

TSA also released guidance recommending that lower-risk surface transportation owners and operators voluntarily implement the same measures. "We have not witnessed a rail industry event on the level of Colonial Pipeline, but a ransomware disruption, let alone a targeted attack, is a plausible scenario," says John Dickson, vice president of the cloud security firm Coalfire, which provides services to DHS and other federal agencies. He says that without "a regulatory nudge," the rail industry, particularly the freight portion, is not likely to improve its cybersecurity hygiene on its own. Other experts warn that TSA could be overwhelmed by reports of what they call noise. "At a high level, the directives seem completely reasonable, but as always, the devil is in the details," says Jake Williams, a former member of the NSA's elite hacking team. "Taken at face value, railway operators would have to report every piece of commodity malware that is discovered in the environment, even if antivirus or EDR prevented that malware from ever executing."


Russian Actors Behind SolarWinds Attack Hit Global Business & Government Targets

In at least one case, the attacker compromised a local VPN account, then used it to conduct reconnaissance and gain access to internal resources in the victim CSP's environment. This allowed them to compromise internal domain accounts. In another campaign, attackers were able to access a victim's Microsoft 365 environment using a stolen session token; it was later discovered that some systems had been infected with the info-stealer Cryptbot before the token was generated. Other techniques include the compromise of a Microsoft Azure AD account within a CSP's tenant in one attack; in another, attackers used RDP to pivot between systems that had limited Internet access. The attackers compromised privileged accounts and used SMB, remote WMI, remote scheduled tasks registration, and PowerShell to execute commands in target networks. Attackers are also making use of a new bespoke downloader dubbed Ceeloader, which decrypts a shellcode payload to execute in memory on a target device.


Automation strategy: 6 key elements

Ad hoc automation tends to occur independently of other efforts. Even if it solves the problem at hand, there are unclear (if any) links to broader goals. While that might be fine to some extent, it can also breed silos, cultural resistance, and other potential issues. Strategic automation can be both incremental and well-connected to the big picture. “While there are many questions a CIO will have along the way when deciding their automation strategy, the single most important question they should ask themselves is: ‘How will automation help my organization achieve the business outcomes we need to get to where we want to be in 4-5 years?’” Becky Trevino, VP of operations at Snow Software, told us. Trevino notes that a “yes-no” matrix can help guide decision-making and prioritization, as in: “Does automating this help us achieve X?” If the answer is yes, then you do it. If the answer is no or maybe, then you should at a minimum be asking deeper questions about why you’re doing it.
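The yes/no matrix Trevino describes is simple enough to sketch in a few lines of code. The function and candidate names below are hypothetical examples, not anything from Snow Software; the sketch just shows the triage rule from the excerpt — a clear “yes” goes on the do-it list, while “no” or “maybe” gets flagged for deeper questioning.

```python
# Illustrative sketch of the "yes-no" prioritization matrix described above.
# Candidate names are made up; the rule is: yes -> do it, no/maybe -> question it.

def prioritize(candidates):
    """Split automation candidates by whether they advance the strategic goal.

    Each candidate is (name, advances_goal), where advances_goal is
    True ("yes"), False ("no"), or None ("maybe").
    """
    do_now, question_further = [], []
    for name, advances_goal in candidates:
        if advances_goal is True:
            do_now.append(name)
        else:
            # "No" or "maybe": ask deeper questions about why you're doing it.
            question_further.append(name)
    return do_now, question_further


# Usage with hypothetical automation candidates.
candidates = [
    ("automate invoice matching", True),
    ("script one-off report", False),
    ("auto-provision dev VMs", None),
]
do_now, question_further = prioritize(candidates)
```

The design choice worth noting is that “maybe” is deliberately lumped with “no”: per the excerpt, anything short of a clear yes warrants interrogation before effort is spent, which keeps ad hoc work from accumulating under ambiguous justifications.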


How consumers will see banks embrace AI in 2022

What does “genuinely personalised” banking look like? To answer that, we should compare these challenger banks with “business as usual” in the sector. Currently, most traditional banks still treat their online accounts as a digital version of a traditional balance statement. The odds are that your bank’s online account only provides a simple, itemised list of your ingoings and outgoings. If you want to calculate how much you spend, see how you allocate that spending, set a realistic budget for next month, or estimate how much you might be able to save in an average month, it’s often the case that you will simply have to trawl through your statement yourself and do the hard calculations. Want to easily see how much goes out on your subscription services or other automatic charges versus incidental spending, and perhaps manage some of those financial commitments? The data is all there, but often has yet to be transformed into easy-to-understand interfaces that can help consumers or small business owners get their finances under control. This ends up being burdensome for people, and it’s also quite unnecessary.
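The kind of transformation the author has in mind — turning a flat transaction list into insight — is straightforward in principle. Here is a hedged sketch, with entirely made-up merchants and amounts, of one such step: separating recurring charges (a likely signal of subscriptions) from incidental spending by spotting repeated merchant-and-amount pairs. Real banks would use richer signals (billing cycles, merchant category codes), so treat the heuristic as illustrative only.

```python
# Illustrative sketch: split a transaction list into recurring charges
# (repeated merchant + amount pairs, e.g. subscriptions) and incidental
# spending. Merchants and amounts are fictional example data.

from collections import defaultdict

def split_recurring(transactions, min_occurrences=2):
    """Return (recurring_totals, incidental) from (merchant, amount) pairs.

    A charge seen at least min_occurrences times with the same merchant
    and amount is treated as recurring; recurring_totals maps merchant
    to total spent on that charge.
    """
    counts = defaultdict(int)
    for merchant, amount in transactions:
        counts[(merchant, amount)] += 1

    recurring_totals, incidental = {}, []
    for (merchant, amount), n in counts.items():
        if n >= min_occurrences:
            recurring_totals[merchant] = amount * n
        else:
            incidental.append((merchant, amount))
    return recurring_totals, incidental


# Usage with fictional statement data.
txns = [("StreamCo", 9.99), ("CoffeeBar", 3.50),
        ("StreamCo", 9.99), ("Bookshop", 12.00), ("StreamCo", 9.99)]
recurring, incidental = split_recurring(txns)
```

Even this crude pass answers the article’s question (“how much goes out on subscriptions versus incidental spending?”) directly from data the bank already holds, which is exactly the gap the author is pointing at.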



Quote for the day:

"Leadership happens at every level of the organization and no one can shirk from this responsibility." -- Jerry Junkins