Daily Tech Digest - June 05, 2022

How the Web3 stack will automate the enterprise

Web3 exists only partially within enterprises, but it is already making a significant impact and altering strategies. Cross River Bank, which just raised $620 million at a $3 billion valuation, powers embedded payments, cards, lending, and crypto solutions for over 80 leading technology partners. Cross River CEO Gilles Gade plans to start offering more crypto-related products and services, moving toward a crypto-first strategy. Investors are excited by the opportunity. “As Web3 continues to gain mindshare of consumers and businesses alike, we believe Cross River sits in a unique position to serve as the infrastructure and interconnective tissue between the traditional and regulated centralized financial system, as it transitions slowly to a decentralized one,” said Lior Prosor, General Partner and Co-founder of Hanaco Ventures in the Cross River press release. In many ways, this moment is no different from when financial institutions and VCs recognized disruptive potential years earlier and invested in FinTech innovation – the shift from analog to digital. If FinTech is the blending of technology and finance, Web3 is the merging of crypto with the web.

Demystifying the Metrics Store and Semantic Layer

First, many critical data assets end up isolated on local servers, in data centers and across cloud services. Unifying them poses a significant challenge. Often there are also no standardized data and business definitions, which makes it harder for businesses to tap the full value of their data. As companies embark on new data management projects, they need to address these concerns; however, many have chosen to sidestep the issue for one reason or another, creating new data silos across the business. Second, as every data warehouse practitioner knows, most business users find it difficult to interpret the data in the warehouse. Because technical metadata such as table names, column names and data types means little to business users, a data warehouse alone isn’t enough to let users conduct analysis on their own. From a business user’s perspective, what can be done to solve this problem? Two popular solutions are metrics stores and semantic layers – but which is the better approach, and what’s the difference between them?
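
To make this concrete, here is a minimal sketch of what a metrics-store-style definition layer might look like, written in Python. The metric names, tables and columns are hypothetical, and real products expose far richer modeling; the point is simply that a business-friendly metric name maps to a single governed definition that every tool reuses.

# Minimal sketch of a metrics-store-style definition layer.
# Metric names, tables and columns are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str         # business-friendly name exposed to analysts
    description: str  # plain-language definition, agreed once
    sql: str          # the single governed SQL expression
    grain: str        # time column the metric is grouped by

METRICS = {
    "net_revenue": Metric(
        name="net_revenue",
        description="Gross revenue minus refunds",
        sql="SUM(gross_amount) - SUM(refund_amount)",
        grain="order_date",
    ),
    "active_users": Metric(
        name="active_users",
        description="Distinct users with at least one session",
        sql="COUNT(DISTINCT user_id)",
        grain="session_date",
    ),
}

def compile_query(metric_name: str, table: str) -> str:
    """Expand a business metric into governed SQL so every BI tool,
    notebook and dashboard computes it exactly the same way."""
    m = METRICS[metric_name]
    return (f"SELECT {m.grain}, {m.sql} AS {m.name} "
            f"FROM {table} GROUP BY {m.grain}")

print(compile_query("net_revenue", "analytics.orders"))
# SELECT order_date, SUM(gross_amount) - SUM(refund_amount)
# AS net_revenue FROM analytics.orders GROUP BY order_date

Because compile_query is the only path from metric name to SQL, every dashboard and notebook that asks for “net_revenue” gets the same calculation, which is exactly the consistency problem both metrics stores and semantic layers set out to solve.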


Why HR plays an important role in preventing cyber attacks

HR staff members often work with legal counsel on security policies, including the creation, maintenance and enforcement of acceptable use policies. Since HR staff communicate frequently with employees, they are well positioned to share information about security and privacy expectations, and they often already work to keep security topics top of mind for employees. ... As with security policy work, HR professionals are often a valuable part of compliance-related initiatives, because certain aspects of state, federal and international privacy and security regulations require HR expertise. This is particularly true for larger organizations with offices or employees in multiple countries. HR may help create processes for user onboarding and offboarding, security awareness and training, and incident response once a crisis occurs. ... Some HR professionals already serve on their IT and security governance committees, as it's only natural that HR should help get the word out on security and assist with policy creation and administration when needed.


7 Reasons Why Serverless Encourages Useful Engineering Practices

Serverless functions are easier to change. After reading the book “The Pragmatic Programmer”, I realized that making your software easy to change is THE de facto principle to live by as an IT professional. For instance, when you leverage functional programming with pure (ideally idempotent) functions, you always know what to expect as input and output, so modifying your code is simple. If written properly, serverless functions encourage code that is stateless and easy to change. They are also easier to deploy — if the changes you made to an individual service don’t affect other components, redeploying a single function or container should not disrupt other parts of your architecture. This is one reason why many teams split their Git repositories from a “monorepo” into one repository per service. With serverless, you are literally forced to make your components small. For instance, you cannot run any long-running processes with AWS Lambda (at least for now): at the time of writing, the maximum timeout configuration doesn’t allow any process to run longer than 15 minutes.
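
As a rough illustration of the kind of small, stateless, idempotent function described above, here is a minimal Python sketch of an AWS Lambda handler. The event shape and field names are assumptions made for this example, not anything prescribed by the article.

# Minimal sketch of a stateless, idempotent AWS Lambda handler
# (Python runtime). The event fields are hypothetical.

import json

def make_receipt(order: dict) -> dict:
    """Pure function: the same input always yields the same output,
    with no hidden state, so it is trivial to test and change."""
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"order_id": order["order_id"], "total": round(total, 2)}

def handler(event, context):
    """Lambda entry point: parse input, call the pure core, return.
    Re-running it with the same event produces the same result
    (idempotent), so a retry after a timeout is safe."""
    order = json.loads(event["body"])
    receipt = make_receipt(order)
    return {"statusCode": 200, "body": json.dumps(receipt)}

Because the business logic lives in a pure function, it can be unit-tested without any cloud infrastructure, and replaying the same event after a retry changes nothing.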



WTF is a Service Mesh?

The internal workings of a Service Mesh are conceptually fairly simple: every microservice is accompanied by its own local HTTP proxy. These proxies perform all the advanced functions that define a Service Mesh (think about the kind of features offered by a reverse proxy or API Gateway). However, with a Service Mesh this functionality is distributed among the microservices—in their individual proxies—rather than being centralised. In a Kubernetes environment these proxies can be automatically injected into Pods, and can transparently intercept all of the microservices’ traffic; no changes to the applications or their Deployment YAMLs (in the Kubernetes sense of the term) are needed. These proxies, running alongside the application code, are called sidecars. They form the data plane of the Service Mesh, the layer through which the data—the HTTP requests and responses—flows. This is only half of the puzzle though: for these proxies to do what we want, they all need complex, individual configuration. Hence a Service Mesh has a second part, a control plane.
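
To make the sidecar idea concrete, here is a heavily simplified Python sketch of what the data-plane half does: a local proxy that fronts the application, forwards every request, and records the kind of per-request telemetry a real mesh proxy would. A production mesh uses a purpose-built proxy such as Envoy; the port numbers and header name here are illustrative assumptions, not mesh defaults.

# Heavily simplified sketch of a sidecar-style HTTP proxy (data plane).
# Ports and the latency header are illustrative assumptions.

import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

APP_PORT = 8080     # the local microservice this sidecar fronts
PROXY_PORT = 15001  # the port traffic is transparently redirected to

class Sidecar(BaseHTTPRequestHandler):
    def do_GET(self):
        start = time.monotonic()
        # Forward the request to the local application untouched.
        # A real proxy would add retries, mTLS, routing rules, etc.
        with urllib.request.urlopen(
            f"http://127.0.0.1:{APP_PORT}{self.path}"
        ) as upstream:
            body = upstream.read()
        self.send_response(upstream.status)
        # Emit the kind of per-request telemetry a mesh collects.
        latency_ms = (time.monotonic() - start) * 1000
        self.send_header("X-Proxy-Latency-Ms", f"{latency_ms:.2f}")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", PROXY_PORT), Sidecar).serve_forever()

The control plane’s job is then to push configuration – routes, retry policies, certificates – down to a whole fleet of such proxies, so that nobody has to configure them one by one.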


Best Practices for Deploying Language Models

We’re recommending several key principles to help providers of large language models (LLMs) mitigate the risks of this technology and realize its full promise of augmenting human capabilities. While these principles were developed specifically from our experience providing LLMs through an API, we hope they will be useful regardless of release strategy (such as open-sourcing or use within a company). We expect these recommendations to change significantly over time, because the commercial uses of LLMs and the accompanying safety considerations are new and evolving. We are actively learning about and addressing LLM limitations and avenues for misuse, and will update these principles and practices in collaboration with the broader community over time. We’re sharing these principles in the hope that other LLM providers may learn from and adopt them, and to advance public discussion of LLM development and deployment.


A cybersecurity expert explains why it would be so hard to obscure phone data in a post-Roe world

There’s not a whole lot users can do to protect themselves. Communications metadata and device telemetry – information from the phone sensors – are used to send, deliver and display content. Not including them is usually not possible. And unlike the search terms or map locations you consciously provide, metadata and telemetry are sent without you even seeing them. Providing consent isn’t plausible: there’s too much of this data, and it’s too complicated to weigh each case. Each application you use – video, chat, web surfing, email – uses metadata and telemetry differently. Giving truly informed consent, knowing what information you’re providing and for what use, is effectively impossible. If you use your mobile phone for anything other than a paperweight, your visit to the cannabis dispensary and your personality – how extroverted you are or whether you’re likely to be on the outs with family since the 2016 election – can be learned from metadata and telemetry and shared.


Three Architectures That Could Power The Robotic Age With Autonomous Machine Computing

Similar to other information technology stacks, the autonomous machine computing stack consists of hardware, systems software and application software. Sitting in the middle of this stack is computer architecture, which defines the core abstraction between hardware and software. This abstraction layer allows software developers to focus on optimizing their software to fully utilize the underlying hardware, building better applications with higher performance and energy efficiency. It also allows hardware developers to focus on building faster, more affordable, more energy-efficient hardware that can unlock the imagination of software developers. ... Hence, computer architecture is essential to information technology. For instance, in the personal computing era, x86 became the dominant computer architecture due to its superior performance; in the mobile computing era, ARM became dominant due to its superior energy efficiency.


Datadog finds serverless computing is going mainstream

Serverless represents the ideal state of cloud computing, where you use exactly the resources you need and no more. That’s because the cloud provider delivers those resources only when a specific event happens and shuts them down when the event is over. It’s not a lack of servers so much as not having to deploy them, because the provider handles that for you in an automated fashion. When people began talking about cloud computing around 2008, one of the touted advantages was elastic computing: using only what you need and scaling up or down as necessary. In reality, developers don’t know what they’ll need, so they often overprovision to make sure the application stays up and running. Datadog created the report based on data running through its monitoring service. While it represents only the activity of its customers, Rabinovitch sees it as quality data given the broad range of customers using its services. “We do think we’re well represented across the industry, and we believe that we’re representative of real production workloads,” he said.


How Platform Engineering Helps Manage Innovation Responsibly

Platform engineering, then, is a support function. If it enables developers, it does so by reducing complexity and making it easier for them and other technical teams to achieve their objectives. Moreover, one of the advantages of having a platform engineering team is that it can balance competing needs and aims — for example, developer experience and security — in a way that ensures engineering capabilities and commercial imperatives are properly aligned. Calling it a “support function” might not sound particularly sexy, but it nevertheless suggests that organizations are maturing in their approach to software development. Development is no longer the locus of moving fast and breaking things; instead it is recognized as something that requires care and stewardship. But this implies responsibility — and that, to invert the old adage, carries considerable power. Platform engineering can therefore become a political beast within organizations. If it can shape the way developers work, it can inevitably play a part in the direction of a whole technology strategy.



Quote for the day:

"Leadership is developed daily, not in a day." -- John C. Maxwell
