Daily Tech Digest - July 31, 2022

Best practices for recovering a Microsoft network after an incident

Often a recovery process is different for different sized organizations. A small business might just want to be back functional as soon as possible while a medium-sized business might take the time to do a root cause analysis. According to the NIST document, “Identifying the root cause(s) of a cyber event is important to planning the best response, containment, and recovery actions. While knowing the full root cause is always desirable, adversaries are incentivized to hide their methods, so discovering the full root cause is not always achievable.” If you use Microsoft Defender for Business Server, Microsoft recommends proactive adjustments to your server to ensure that you can best prevent attacks, specifically that you use the same attack surface reduction rules recommendation for workstations. One example from the screen image below: block all Office applications from creating child processes. As you rebuild your network after an incident, remember these settings, as you often redeploy servers with default settings. You might not have remembered or documented all the settings you need to better protect your network.


How AI Is Transforming The Future Of The Finance Industry

Since the entire foundation of AI is learning from past data, it only seems sensible that AI would flourish in the financial services industry, where keeping books and records is a given for businesses. Consider the use of credit cards as an example. Today, we utilize credit scores to determine who is and is not eligible for credit cards. However, it is not always advantageous for businesses to divide people into “haves” and “have-nots.” Instead, information about a person’s loan repayment patterns, the number of loans that are still open, the number of credit cards that person has already, etc. can be used to tailor the interest rate on a card so that the financial institution issuing the card feels more comfortable with it. ... When it comes to security and fraud detection, AI is on top. It can leverage historical spending patterns across various transaction instruments to flag unexpected activity, such as using a foreign card shortly after it has been used elsewhere or an effort to withdraw money in an unusual amount for the account in question. The system has no qualms about learning, which is another excellent aspect of AI fraud detection.


Meta faces new FTC lawsuit for VR company acquisition

On Wednesday, the FTC sued Meta in an attempt to block its acquisition of virtual reality technology company Within Unlimited and its VR fitness app Supernatural. The FTC in a press release said Meta's "virtual reality empire" already includes a virtual reality fitness app and alleged Meta is attempting to "buy its way to the top." "Meta already owns a best-selling virtual reality fitness app, and it had the capabilities to compete even more closely with Within's popular Supernatural app," John Newman, FTC Bureau of Competition deputy director, said in the release. "But Meta chose to buy market position instead of earning it on the merits. This is an illegal acquisition, and we will pursue all appropriate relief." According to Meta's statement in response to the lawsuit, the FTC's case is "based on ideology and speculation, not evidence." ... It's not the first time the FTC has accused Meta of buying out the competition. In an ongoing lawsuit against the company, the FTC alleged that Meta's previous Instagram and WhatsApp acquisitions served to kill what the company viewed as competition to its popular social media site Facebook.


Blockchain Applications That Make Sense Right Now

Using blockchain technology, users can create digital assets that are verifiable, scarce and portable – the core properties of a “token”. The owner of a blockchain-generated token can be sure they are the only owner of a limited quantity item. These are tokens that have genuine value as they solve the ownership challenge digital assets have had since day one. This is the core building block that is not possible in Web2’s centralized world. Yes, Fortnite can sell you virtual clothing, and a bank can show a deposit in your account, but those assets are at the behest of the central actors. Creators are similarly beholden to the platforms that distribute their products, and their popularity is governed by their algorithms and business interests of those centralized platforms. With blockchain, new types of assets can be created that hold value outside any centralized platform (even though those platforms may still be very relevant for creation, display and distribution). No centralized actor can unilaterally change the ownership of an asset ‘on-chain’, and rules are visible in public code. Ownership can be ascertained with certainty for all who examine it.


Yes, you are being watched, even if no one is looking for you

Whether or not you pass under the gaze of a surveillance camera or license plate reader, you are tracked by your mobile phone. GPS tells weather apps or maps your location, Wi-Fi uses your location, and cell-tower triangulation tracks your phone. Bluetooth can identify and track your smartphone, and not just for COVID-19 contact tracing, Apple’s “Find My” service, or to connect headphones. People volunteer their locations for ride-sharing or for games like Pokemon Go or Ingress, but apps can also collect and share location without your knowledge. Many late-model cars feature telematics that track locations–for example, OnStar or Bluelink. All this makes opting out impractical. The same thing is true online. Most websites feature ad trackers and third-party cookies, which are stored in your browser whenever you visit a site. They identify you when you visit other sites so advertisers can follow you around. Some websites also use key logging, which monitors what you type into a page before hitting submit. Similarly, session recording monitors mouse movements, clicks, scrolling and typing, even if you don’t click “submit.”


How remote work disrupted global supply chains

To make matters worse, many U.S.-based distributors and retailers decided to bulk up their inventories to hedge against shortages. The surge of e-commerce contributed to the disruptive spiral by making two-day shipping a necessity. In addition, the resulting shortage of warehouse space worsened bottlenecks by pushing supplies back to shipping docks and freight terminals. While remote work isn’t entirely to blame for the supply chain crisis, it clearly kicked off a sequence of events that took on a life of its own. Zoom, Google Docs, and Amazon undermined the assumption that history would repeat itself. When is it all going to end? Experts disagree. Most say things will probably improve for the rest of this year and return to something close to normal by the end of 2023. But even the Federal Reserve Bank of Cleveland recently admitted that the sources it relies upon for intelligence “are mostly based on hope rather than on concrete evidence.” In the meantime, the crisis has also cast the spotlight on the delicate interconnections that hold the world’s supply lines together and the effects that minor disruptions at the far end of the chain can have further upstream.


Network security depends on two foundations you probably don’t have

Any use of the network creates traffic and traffic patterns. Malware that’s probing for vulnerabilities is an application, and it also generates a traffic pattern. If AI/ML can monitor traffic patterns, it can pick out a malware probe from normal application access. Even if malware infects a user with the right to access a set of applications, it’s unlikely the malware would be able to duplicate the traffic pattern that user generated with legitimate access. Thus, AI/ML could detect a difference, and create an alert. That alert, like a journal alert on unauthorized connections, would then be followed up to validate the state of the user’s device security. The advantage of the AI/ML traffic pattern analysis is that it can be effective even when user identity is difficult to pin down, so explicit connection authorization is problematic. In fact, you can do traffic pattern analysis at any level from single users to the entire network. Think of it as involving a kind of source/destination-address-logging process; at a given point, have I seen packets from or to this address or this subnetwork before? If not, then a more detailed analysis may be in order, or even an alert.
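The source/destination-address-logging idea in the paragraph above can be shown without any AI/ML at all. The following is an added Python illustration, not code from the original article: a tracker that remembers every address and /24 subnet it has seen, treats a new address inside a known or previously seen subnet as something to log, and treats a never-seen subnet as the point where “a more detailed analysis may be in order, or even an alert.” The flow records, the internal address plan and the /24 grouping are constructed assumptions for the demonstration.

```python
"""First-seen address/subnet logging as a detection heuristic (added example)."""
from dataclasses import dataclass, field
from ipaddress import ip_address, ip_network

@dataclass
class FirstSeenTracker:
    """Answers: have I seen packets from or to this address or subnet before?"""
    known_subnets: list                       # address plan you already trust
    seen_addresses: set = field(default_factory=set)
    seen_subnets: set = field(default_factory=set)
    prefix: int = 24                          # assumption: group hosts by /24

    def observe(self, src, dst):
        """Record one flow and return findings as (level, role, value) tuples.

        level "note"  : new address, but its subnet is known or already seen
        level "alert" : address in a subnet never seen and not in the plan
        """
        findings = []
        for role, raw in (("src", src), ("dst", dst)):
            addr = ip_address(raw)
            subnet = ip_network(f"{addr}/{self.prefix}", strict=False)
            if addr in self.seen_addresses:
                continue                      # nothing new about this endpoint
            in_plan = any(addr in net for net in self.known_subnets)
            if in_plan or subnet in self.seen_subnets:
                findings.append(("note", role, str(addr)))
            else:
                findings.append(("alert", role, str(subnet)))
            self.seen_addresses.add(addr)
            self.seen_subnets.add(subnet)
        return findings

# Constructed sample flows (src, dst); 10.0.0.0/16 is the assumed internal plan.
SAMPLE_FLOWS = [
    ("10.0.1.15", "10.0.2.40"),     # first sight of two internal hosts
    ("10.0.1.15", "10.0.2.40"),     # repeat: nothing to report
    ("10.0.1.15", "10.0.2.41"),     # new host in an already-seen subnet
    ("10.0.1.15", "203.0.113.7"),   # first contact with an unknown subnet
    ("10.0.1.15", "203.0.113.9"),   # same external subnet again
]

def run(flows=SAMPLE_FLOWS, known=("10.0.0.0/16",)):
    """Replay flows through a fresh tracker and return all findings."""
    tracker = FirstSeenTracker(known_subnets=[ip_network(n) for n in known])
    out = []
    for src, dst in flows:
        out.extend(tracker.observe(src, dst))
    return out

def summarize(findings):
    """Count findings per level so the result can be checked at a glance."""
    counts = {"note": 0, "alert": 0}
    for level, _, _ in findings:
        counts[level] += 1
    return counts

if __name__ == "__main__":
    for f in run():
        print(f)
```

On the sample flows the only alert is the first contact with 203.0.113.0/24; every other first sighting is just a note, which is the distinction the article draws between logging and escalating.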


Building resilience against emerging security threats

Threats such as malware and data breaches almost always rely on misconfigured systems to succeed. Perhaps a default password hasn’t been changed, a cloud storage instance has been set to public, or a dangerous port is accidentally left open to the internet. These are all errors that can be hard to spot in a complex data centre. It’s a time-consuming task that may not be a top priority amongst competing business goals, meaning the vulnerability remains unidentified. Configuration management tools help here by scanning the entire estate, from cloud storage to local servers, websites to network devices, and more, identifying misconfigurations. They are vendor agnostic and surface anomalies that might otherwise go unnoticed – until it is too late. Auditing the estate in this way gives CISOs the visibility and control they need to effectively monitor their estate and be proactive in remediating misconfigurations. Armed with this insight, the company’s risk is reduced and its resilience is enhanced. 


SQL Server 2022: Here’s what you need to know

If you’ve ever looked at the claims for blockchains and thought that an append-only database could do that without all the work of designing and maintaining a distributed system that likely doesn’t scale to high-throughput queries (or the environmental impact of blockchain mining), another feature that started out in Azure SQL and is now coming to SQL Server 2022 is just what you need. “Ledger brings the benefits of blockchains to relational databases by cryptographically linking the data and their changes in a blockchain structure to make the data tamper-evident and verifiable, making it easy to implement multi-party business processes, such as supply chain systems, and can also streamline compliance audits,” Khan explained. For example, the quality of an ice cream manufacturer’s ice cream depends on both the ingredients that its suppliers send and the finished ice cream it delivers being shipped at the right temperature. If the refrigerated truck has a fault, the cream might curdle, or the ice cream might melt and then refreeze once it’s in the store freezer. By collecting sensor information from everyone in its supply chain, the ice cream manufacturer can track down where the problem is.


7 benefits of using design review in your agile architecture practices

For an enterprise to practice architecture in an agile environment, several architects and designers must support the project's agile value streams, which are the actions taken from conception to delivery and support that add value to a product or service. One organization may have a distinct team of architects producing solution designs and templates. Another might place the architect inside the agile squad in a dual senior engineer role. ... The things involved in a design review include: The designer is the person who wants to solve a problem. The documentation is the document at the center of attention. It contains information regarding all aspects of the problem and the proposed solution. The reviewer is the person who will review the documentation. The process includes the agreed-upon rules and interactions that define the designer's and reviewer's communications. It may stand alone or be part of a bigger process. For example, in a software development life cycle, it could precede development, or in an API specification, it could include evaluating changes. The review scope is the area the reviewer tries to cover when reviewing the documentation (technical or not).



Quote for the day:

"You've got to risk the terrible and pathetic, in order to get to the graceful and elegant."

Daily Tech Digest - July 30, 2022

Google Drive vs OneDrive: Which cloud solution is right for you?

Google Drive is host to the majority of the cloud storage features that individuals have come to expect. Even with the free plan, users get access to a web interface, a mobile app, and sharing settings that can be adjusted at the admin level. Microsoft OneDrive users will enjoy similar functionality, including automatic syncing, where users indicate the files and folders they want to be backed up, so they are automatically synced with copies in the cloud. One of the biggest divides facing users when determining whether Google Drive or OneDrive is the best fit for them concerns their operating system of choice. ... Fans of Word, Excel, and the like, can still use Google Drive but may have to convert documents into Docs, Sheets, and other Google-made alternatives. That’s not a major issue but might affect how you perceive the performance of each cloud solution. Although there’s not much to choose from in terms of performance, it’s worth pointing out that Microsoft Office, which is usually employed as an offline tool, will take up more storage space than Google Workspace, which can be accessed via your web client. If storage is a major concern for you, this might be worth keeping in mind.


XaaS isn’t everything — and it isn’t serviceable

BPOs and XaaS do share a characteristic that might, in some situations, be a benefit but in most cases is a limitation, namely, the need to commoditize. This requirement isn’t a matter of IT’s preference for simplification, either. It’s driven by business architecture’s decision-makers’ preference for standardizing processes and practices across the board. This might not seem to be an onerous choice, but it can be. Providing a service that operates the same way to all comers no matter their specific and unique needs might cut immediate costs but can be, in the long run, crippling. Imagine, for example, that Human Resources embraces the Business Services Oriented Architecture approach, offering up Human Resources as a Service to its internal customers. As part of HRaaS it provides Recruiting as a Service (RaaS). And to make the case for this transformation it extols the virtues of process standardization to reduce costs. Imagine, then, that you’re responsible for Store Operations for a highly seasonal retailer, one that has to ramp up its in-store staffing from Black Friday through Boxing Day. 


Myth Busting: 5 Misconceptions About FinOps

“FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions.” ... With traditional procurement models, central teams retained visibility and control over expenditures. While this would add layers of time and effort to purchases, this was accepted as a worthwhile tradeoff. Part of the reason FinOps has come into existence is that it enables teams to break away from the rigid, centrally controlled procurement models that used to be the norm. Rather than having a finance team that acts as a central gatekeeper and bottleneck, FinOps enables teams to fully leverage opportunities available for automation in the cloud. Compared to rigid, monthly, or quarterly budget cycles—and being blindsided by cost overruns long after the fact—teams move to continuous optimization. Real-time reporting and just-in-time processes are two of the core principles of FinOps.


Selenium vs Cypress: Does Cypress Replace Selenium?

The Cypress test framework captures snapshots during test execution. It enables QAs or software developers to hover over a precise command in the Command Log to see exactly what happened at that specific phase. Unlike Selenium, it does not require adding implicit or explicit wait commands to test scripts: it waits for assertions and commands automatically. QAs or developers can use Stubs, Clocks, and Spies to validate and control the behavior of server responses, timers, or functions. The automatic scrolling operation makes sure that the component is in view prior to performing any activity (for instance, clicking a button). Previously Cypress supported only Google Chrome tests but, with current updates, Cypress now offers support for Mozilla Firefox as well as Microsoft Edge browsers. As the developer or programmer writes commands, this tool executes them in real time, giving visual feedback as they run. It also carries brilliant documentation. Test execution for a local Selenium Grid can be ported to function with a cloud-based Selenium Grid with the least effort.


5 Advanced Robotics And Industrial Automation Technologies

One of the most critical advances in robotics and industrial automation technologies is the development of autonomous vehicles. These vehicles can drive themselves, making them safer and more efficient than traditional vehicles. Autonomous vehicles can be used in a variety of ways. For example, they can be used to transport goods around a factory. They can also be used to help people search for objects or people. In all cases, autonomous vehicles are much safer than traditional vehicles. As autonomous vehicles become more common, they will significantly impact the automotive industry. They will reduce the time people need to spend driving cars. They will also reduce the number of accidents that happen on the road. ... One of the most critical safety features of advanced robotics and industrial automation technologies is their danger detection systems. These systems help to protect workers from dangerous situations. One type of danger detection system is the automatic emergency braking system. This system uses cameras and sensors to detect obstacles on the road and brake automatically if necessary.


How to use the Command pattern in Java

The Command pattern is one of the 23 design patterns introduced with the Gang of Four design patterns. Command is a behavioral design pattern, meaning that it aims to execute an action in a specific code pattern. When it was first introduced, the Command pattern was sometimes explained as callbacks for Java. While it started out as an object-oriented design pattern, Java 8 introduced lambda expressions, allowing for an object-functional implementation of the Command pattern. This article includes an example using a lambda expression in the Command pattern. As with all design patterns, it's very important to know when to apply the Command pattern, and when another pattern might be better. Using the wrong design pattern for a use case can make your code more complicated, not less. We can find many examples of the Command pattern in the Java Development Kit, and in the Java ecosystem. One popular example is using the Runnable functional interface with the Thread class. Another is handling events with an ActionListener.


Singleton Design Pattern in C# .NET Core – Creational Design Pattern

The default constructor of the Singleton class is private; by making the constructor private, the client code is restricted from directly creating an instance of the Singleton class. In the absence of a public constructor, the only way to get the object of the Singleton class is to use the global method to request an object, i.e. the static GetInstance() method in the Singleton class should be used to get the object of the class. The GetInstance() method creates the object of the Singleton class when it is called for the first time and returns that instance. All subsequent requests to the GetInstance() method get the same instance of the Singleton class that was already created during the first request. This standard implementation is also known as lazy instantiation, as the object of the singleton class is created only when it is required, i.e. when there is a request for the object to the GetInstance() method. The main problem with the standard implementation is that it is not thread-safe. Consider a scenario where two different requests hit the GetInstance() method at the same time: in that case, there is a possibility that two different objects get created, one by each request.
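The race the paragraph describes is easy to state but worth actually watching. The following is an added Python demonstration, not the article's C# code: the unsafe `get_instance_unsafe` does the same check-then-create as the standard implementation, and a `threading.Barrier` holds every concurrent request inside the window between the null check and the assignment so the duplicate-instance outcome is reproduced deterministically rather than by luck. `get_instance_safe` closes the window by re-checking under a lock. The hook that simulates a slow constructor is an assumption of the demonstration.

```python
"""Lazy singleton: the check-then-create race and its locked fix (added example)."""
import threading
import time

class LazySingleton:
    _instance = None
    _lock = threading.Lock()

    @classmethod
    def get_instance_unsafe(cls, during_construction=lambda: None):
        # Standard lazy instantiation: no synchronization at all.
        if cls._instance is None:          # several threads can pass this...
            during_construction()          # ...while construction is in progress
            cls._instance = cls()          # ...and each assigns its own object
        return cls._instance

    @classmethod
    def get_instance_safe(cls, during_construction=lambda: None):
        if cls._instance is None:          # fast path avoids the lock later
            with cls._lock:
                if cls._instance is None:  # re-check: another thread may have won
                    during_construction()
                    cls._instance = cls()
        return cls._instance

    @classmethod
    def reset(cls):
        cls._instance = None

def count_instances(getter, slow_hook, n_threads=8):
    """Fire n_threads simultaneous requests; return how many distinct objects came back."""
    LazySingleton.reset()
    results = []                            # keep objects alive so ids stay unique
    def worker():
        results.append(getter(slow_hook))
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len({id(obj) for obj in results})

def race_unsafe(n_threads=8):
    # Barrier: nobody assigns until every thread has already passed the None check,
    # which models constructors slow enough for all requests to arrive mid-construction.
    barrier = threading.Barrier(n_threads)
    return count_instances(LazySingleton.get_instance_unsafe, barrier.wait, n_threads)

def race_safe(n_threads=8):
    # The lock serializes construction, so a barrier would deadlock; a short sleep
    # stands in for the same slow constructor instead.
    return count_instances(LazySingleton.get_instance_safe,
                           lambda: time.sleep(0.01), n_threads)

if __name__ == "__main__":
    print("unsafe distinct instances:", race_unsafe())
    print("safe distinct instances:  ", race_safe())
```

The unsafe variant hands back one object per thread; the locked variant hands back one object total. The double-checked form is the Python analogue of the thread-safe C# variants the article goes on to discuss, with the outer check there purely to skip the lock once the instance exists.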


Here’s when to use data visualization tools, and why they’re helpful

Successful data visualization tools will help you understand your audience, set up a clear framework to interpret data and draw conclusions, and tell a visual story that might not come off as clean and concise with raw data points. Data visualization tools—when used properly—will help to better tell a given story and make it possible to better pull information, see trending patterns, and draw conclusions from large data sets. Data visualization tools also lean into a more aesthetically pleasing approach to mapping and tracking data. It goes beyond simply pasting information onto a pie chart and instead uses design know-how, color theory, and other practices to ensure information is presented in an interesting but easy-to-understand manner. Although data visualization tools have always been popular in the design space, the right data visualization tools can aid just about any field of work or personal interest. For example, data visualization tools can help journalists and editors track trending news stories to better understand reader interest.


9 Tips for Modernizing Aging IT Systems

Once you’ve identified where the failures are in aging systems, compute the costs in fixes, patches, upgrades, and add-ons to bring the system up to modern requirements. Now add any additional costs likely to be incurred in the near future to keep this system going. Compare the total to other available options, including a new or newer system. “While this isn’t a one-size-fits-all approach, the last 2.5 years have proven just how quickly priorities can change,” says Brian Haines, chief strategy officer for FM:Systems, an integrated workspace management system software provider. “Rather than investing in point solutions that may serve the specific needs of the organization today, a workplace tech solution that offers the ability to add or even remove certain functions later to the same system means organizations can more efficiently respond to ever-changing business, employee, workplace, visitor and even asset needs going forward.” “This also helps IT teams drastically reduce the time needed to shop for, invest in, and deploy a separate solution that may or may not be compatible,” Haines adds.


CISA Post-Quantum Cryptography Initiative: Too Little, Too Late?

Mike Parkin, senior technical engineer at Vulcan Cyber, a provider of SaaS for enterprise cyber risk remediation, agreed that the move comes a little late, but said CISA’s initiative is still a good step. “People have been saying for years that the development of quantum computing would lead to the end of cryptography as we know it,” he said. “With developments in the field bringing us closer to a usable quantum computer, it’s past time to think about how to deal with the future of cryptography.” He pointed out the modern internet relies heavily on cryptography across the board, and quantum computing has the potential to break a lot of that encryption, rendering it effectively useless. “That, in turn, would effectively break many of the internet services we’ve all come to rely on,” Parkin said. “Quantum computing is not yet to the point of rendering conventional encryption useless—at least that we know of—but it is heading that way.” He said he believes the government is in the position to set encryption standards and expectations for normal use and can work closely with industry to make sure the standards are both effective and practical.



Quote for the day:

"It is better to fail in originality than to succeed in imitation." -- Herman Melville

Daily Tech Digest - July 29, 2022

How to apply security at the source using GitOps

Let’s talk about the security aspects now. Most security tools detect potential vulnerabilities and issues at runtime (too late). In order to fix them, either a reactive manual process needs to be performed (e.g., directly modifying a parameter in your k8s object with kubectl edit) or, ideally, the fix happens at the source and is propagated all along your supply chain. This is what is called “Shift Security Left”: from fixing the problem when it is too late to fixing it before it happens. This doesn’t mean every security issue can be fixed at the source, but adding a security layer directly at the source can prevent some issues. ... Imagine you introduced a potential security issue in a specific commit by modifying some parameter in your application deployment. Leveraging the Git capabilities, you can roll back the change if needed directly at the source, and the GitOps tool will redeploy your application without user interaction. ... Those benefits are good enough to justify using GitOps methodologies to improve your security posture, and they come out of the box, but GitOps is a combination of a few more things.


How to Open Source Your Project: Don’t Just Open Your Repo!

When opening your source code, the first task should be to select a license that fits your use case. In most cases, it is advisable to include your legal department in this discussion, and GitHub has many great resources to help you with this process. For StackRox, we oriented ourselves toward similar Red Hat and popular open source projects and picked Apache 2.0 where possible. After you’ve decided on what parts you open up and how you will open them, the next question is, how will you make this available? Besides the source code itself, for StackRox, there are also Docker images, as mentioned. That means we also open the CI process to the public. For that to happen, I highly recommend you review your CI process. Assume that any insecure configuration will be used against you. Review common patterns for internal CI processes like credentials, service accounts, deployment keys or storage access. Also, it should be abundantly clear who can trigger CI runs, as your CI credits/resources are usually quite limited, and CI integrations have been known to run cryptominers or other harmful software.


New hardware offers faster computation for artificial intelligence, with much less energy

Programmable resistors are the key building blocks in analog deep learning, just like transistors are the core elements for digital processors. By repeating arrays of programmable resistors in complex layers, researchers can create a network of analog artificial "neurons" and "synapses" that execute computations just like a digital neural network. This network can then be trained to achieve complex AI tasks like image recognition and natural language processing. A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain. Moreover, this inorganic material also makes the resistor extremely energy-efficient. Unlike materials used in the earlier version of their device, the new material is compatible with silicon fabrication techniques.


Simplifying DevOps and Infrastructure-as-Code (IaC)

Configuration drift is what happens when a declarative forward engineering model no longer matches the state of a system. It can happen when a developer changes a model’s code without updating the systems built using that model. It could also be the result of an engineer who does exploratory ad-hoc operations and changes a system but fails to go back into the template and update its code. Is it realistic to ban operators from ad-hoc exploration? Actually, some companies have policies forbidding any operator or developer to touch a live production environment. Ironically, when a production system breaks, it’s the first rule overridden: ad-hoc exploration is welcomed from anyone able to get to the bottom of the issue. Engineers who adopt IaC usually don’t like the work that comes with remodeling an existing system. Still, because of high demand fueled by user frustration, there’s a tool for every configuration management language—they just fail to live up to engineer expectations. The best-case scenario is that an engineer can use a reverse-engineered template to copy and paste segments into their own but will need to write the rest manually.


Is Your Team Agile Or Fragile?

Agility is the ability to move your body quickly and easily and also the ability to think quickly and clearly, so it makes sense why project managers chose this term. The opposite of agile is fragile, rigid, clumsy, slow, etc. Not only are they not flattering adjectives, but they are also dangerous traits if they were descriptions of a team within an organization. ... How many times have you been in a meeting and you have an idea or you disagree with what most participants agree on—but you don't say anything and you conform to the norms? Why don't you speak up? In general, there might be two reasons for it. The most likely one is the lack of psychological safety, the feeling that your idea would be shut down or ignored. Perhaps you worried you would ruin team harmony, or maybe you quickly started doubting yourself. That is one of the guaranteed ways of preventing progress and limiting growth. Psychological safety is the number-one trait of high-performing teams, according to Google’s Aristotle research. It is the ability to speak up without fearing any negative consequences, combined with the willingness to contribute, which leads us to the third factor.


The Heart of the Data Mesh Beats Real-Time with Apache Kafka

A data mesh enables flexibility through decentralization and best-of-breed data products. The heart of data sharing requires reliable real-time data at any scale between data producers and data consumers. Additionally, true decoupling between the decentralized data products is key to the success of the data mesh paradigm. Each domain must have access to shared data but also the ability to choose the right tool (i.e., technology, API, product, or SaaS) to solve its business problems. The de facto standard for data streaming is Apache Kafka. A cloud-native data streaming infrastructure that can link clusters with each other out-of-the-box enables building a modern data mesh. No Data Mesh will use just one technology or vendor. Learn from inspiring posts from your favorite data products vendors like AWS, Snowflake, Databricks, Confluent, and many more to successfully define and build your custom Data Mesh. Data Mesh is a journey, not a big bang. A data warehouse or data lake (or in modern days, a lakehouse) cannot be the only infrastructure for data mesh and data products. 


Microservice Architecture, Its Design Patterns And Considerations

Microservices architecture is one of the most useful architectures in the software industry. If followed properly, it can help in the creation of much better software applications. Here you’ll get to know what microservices architecture is, the design patterns necessary for its efficient implementation, and why and why not to use this architecture for your next software. ... Services in this pattern are easy to develop, test, deploy and maintain individually. Small teams are sufficient and responsible for each service, which reduces extensive communication and also makes things easier to manage. This allows teams to adopt different technology stacks, upgrade technology in existing services, and scale, change or deploy each service independently. ... Both microservices and monolithic services are architectural patterns that are used to develop software applications in order to serve the business requirements. They each have their own benefits, drawbacks, and challenges. On the one hand, when Monolithic Architectures serve as a Large-Scale system, this can make things difficult.


Discussing Backend For Front-end

Mobile clients changed this approach. The display area of mobile clients is smaller: just smaller for tablets and much smaller for phones. A possible solution would be to return all data and let each client filter out the unnecessary ones. Unfortunately, phone clients also suffer from poorer bandwidth. Not every phone has 5G capabilities. Even if it were the case, it’s no use if it’s located in the middle of nowhere with the connection point providing H+ only. Hence, over-fetching is not an option. Each client requires a different subset of the data. With monoliths, it’s manageable to offer multiple endpoints depending on each client. One could design a web app with a specific layer at the forefront. Such a layer detects the client from which the request originated and filters out irrelevant data in the response. Over-fetching in the web app is not an issue. Nowadays, microservices are all the rage. Everybody and their neighbours want to implement a microservice architecture. Behind microservices lies the idea of two-pizza teams. 


Temenos Benchmarks Show MongoDB Fit for Banking Work

In a mock scenario, Temenos’ research team created 100 million customers with 200 million accounts and pushed through 24,000 transactions and 74,000 MongoDB queries a second. Even with that considerable workload, the MongoDB database, running on an Amazon Web Services’ M80 instance, consistently kept response times under a millisecond, which is “exceptionally consistent,” Coleman said. (This translated into an overall response time for the end user’s app at around 18 milliseconds, which is pretty much so fast as to be unnoticeable). Coleman compared this test with an earlier one the company did in 2015, using all Oracle gear. He admits this is not a fair comparison, given the older generation of hardware. Still, the comparison is eye-opening. In that setup, an Oracle 32-core cluster was able to push out 7,200 transactions per second. In other words, a single MongoDB instance was able to do the work of 10 Oracle 32-core clusters, using much less power.


The role of the CPTO in creating meaningful digital products

It’s easy to see that product, design and more traditional business analysis have a lot of crossover, so where do you draw the line, if at all? This then poses the problem of scale. Having a CPTO over a team of 100-150 is fine, but if you scale that to the multiple disciplines over hundreds of people, it starts to become a genuine challenge. Again, having strong “heads of” protects you from this, but it forces the CxTO to become slightly more abstracted from reality. Creating autonomy (and thus decentralisation) in established product delivery teams helps make scaling less of a problem, but it requires both discipline and trust. Could you do this with both a CPO and a CTO? Yes, but by making it a person’s explicit role to harmonise the two concerns, you create a more concrete objective, and there is a need to resolve any conflict between the two worlds, centralising accountability. In reality, many CTOs are starting to pick up product thinking, and I’m sure the reverse is also true of CPOs. However, switching between two idioms is a big ask. It’s tiring (but rewarding), and could mean your focus can’t be as deep in either of the two camps.



Quote for the day:

"It is one thing to rouse the passion of a people, and quite another to lead them." -- Ron Suskind

Daily Tech Digest - July 28, 2022

The Beautiful Lies of Machine Learning in Security

The biggest challenge in ML is availability of relevant, usable data to solve your problem. For supervised ML, you need a large, correctly labeled dataset. To build a model that identifies cat photos, for example, you train the model on many photos of cats labeled "cat" and many photos of things that aren't cats labeled "not cat." If you don’t have enough photos or they're poorly labeled, your model won't work well. In security, a well-known supervised ML use case is signatureless malware detection. Many endpoint protection platform (EPP) vendors use ML to label huge quantities of malicious samples and benign samples, training a model on "what malware looks like." These models can correctly identify evasive mutating malware and other trickery where a file is altered enough to dodge a signature but remains malicious. ML doesn't match the signature. It predicts malice using another feature set and can often catch malware that signature-based methods miss. However, because ML models are probabilistic, there's a trade-off. ML can catch malware that signatures miss, but it may also miss malware that signatures catch. 


6 Machine Learning Algorithms to Know About When Learning Data Science

Decision trees are models that resemble a tree-like structure containing decisions and possible outcomes. They consist of a root node, which forms the start of our tree, decision nodes which are used to split the data based on a condition, and leaf nodes which form the terminal points of the tree and the final outcome. Once a decision tree has been formed, we can use it to predict values when new data is presented to it. ... Random Forest is a supervised ensemble machine learning algorithm that aggregates the results from multiple decision trees, and can be applied to classification and regression-based problems. Using the results from multiple decision trees is a simple concept and allows us to reduce the problem of overfitting and underfitting experienced with a single decision tree. To create a Random Forest we first need to randomly select a subset of samples and features from the main dataset, a process known as “bootstrapping”. This data is then used to build a decision tree. Carrying out bootstrapping avoids issues of the decision trees being highly correlated and improves model performance.


Data science isn’t particularly sexy, but it’s more important than ever

Not only is data cleansing an essential part of data science, it’s actually where data scientists spend as much as 80% of their time. It has ever been thus. As Mike Driscoll described in 2009, such “data munging” is a “painful process of cleaning, parsing and proofing one’s data.” Super sexy! Now add to that drudgery the very real likelihood that many enterprises, as excited as they are to jump into data science, lack “a suitable infrastructure in place to start getting value out of AI,” as Jonny Brooks has articulated: The data scientist likely came in to write smart machine learning algorithms to drive insight but can’t do this because their first job is to sort out the data infrastructure and/or create analytic reports. In contrast, the company only wanted a chart that they could present in their board meeting each day. The company then gets frustrated because they don’t see value being driven quickly enough and all of this leads to the data scientist being unhappy in their role. As I have written before: “Data scientists join a company to change the world through data, but quit when they realize they’re merely taking out the data garbage.”


Top 7 Skills Required to Become a Data Scientist

A deep understanding of machine learning and artificial intelligence is a must for implementing tools and techniques such as different kinds of logic, decision trees, etc. These skills enable a data scientist to work on and solve complex problems, specifically those designed for making predictions or deciding future goals. Those who possess these skills will surely stand out as proficient professionals. With the help of machine learning and AI concepts, an individual can work on different algorithms and data-driven models, and simultaneously handle large data sets, for example cleaning data by removing redundancies. ... Establishing your career as a data science professional will require you to have the ability to handle complexity. One must have the capability to identify and develop both creative and effective solutions as and when required. You might face challenges in finding ways to develop a solution; this requires clarity in the concepts of data science, breaking the problems down into multiple parts and aligning them in a structured way.


The Psychology Of Courage: 7 Traits Of Courageous Leaders

Like so many complex psychological human characteristics, courage can be difficult to nail down. On the surface, courage seems like one of those “I know it when I see it” concepts. In my twenty years spent facilitating and coaching innovation, creativity, strategy and leadership programs, and in partnership with Dr. Glenn Geher of the Psychology Department of the State University of New York at New Paltz, I’ve identified behavioral attributes that often correlate with a person’s access to their courage. Each attribute has influential effects on organizational culture at all levels. Fostering these attributes in your own life (at work and beyond) and within your team can help you lead toward the courageous future you’re striving to achieve. ... Courage requires taking intentional risks. And the bigger the risk, the more courage it takes (and the bigger the outcome can be). Those who understand the importance of facing fear and being vulnerable, who accept that falling and getting up again is part of the journey, tend to have quicker access to their courage.


There is a path to replace TCP in the datacenter

"The problem with TCP is that it doesn't let us take advantage of the power of datacenter networks, the kind that make it possible to send really short messages back and forth between machines at these fine time scales," John Ousterhout, Professor of Computer Science at Stanford, told The Register. "With TCP you can't do that, the protocol was designed in so many ways that make it hard to do that." It's not like the realization of TCP's limitations is anything new. There has been progress to bust through some of the biggest problems, including in congestion control to solve the problem of machines sending to the same target at the same time, causing a backup through the network. But these are incremental tweaks to something that is inherently not suitable, especially for the largest datacenter applications (think Google and others). "Every design decision in TCP is wrong for the datacenter and the problem is, there's no one thing you can do to make it better, it has to change in almost every way, including the API, the very interface people use to send and receive data. It all has to change," he opined.


Typemock Simplifies .NET, C++ Unit Testing

When testing legacy code, you need to test small parts of the logic one by one, such as the behavior of a single function, method or class. To do that the logic must be isolated from the legacy code, he explained. As Jennifer Riggins explained in a previous post, unit testing differs from integration testing, which focuses on the interaction between these units or components, and catches errors at the unit level earlier, so the cost of fixing them is dramatically reduced. ... Typemock uses special code that can intercept the flow of the software: instead of calling the real code, whether it’s a real method or a virtual method, it can intercept the call, and you can fake different things in the code, he said. Typemock has been around since 2004 when Lopian launched the company with Roy Osherove, a well-known figure in test-driven development. They first released Typemock Isolator in 2006, a tool for unit testing SharePoint, WCF and other .NET projects. Isolator provides an API that helps users write simple and human-readable tests that are completely isolated from the production code.


Why Web 3.0 Will Change the Current State of the Attention Economy Drastically

The attention economy requires improvements, and Web 3.0 is capable of making them happen. In the foreseeable future, it will drastically change the interplay between consumers, advertisers and social media platforms. Web 3.0 will give power to the people. It may sound pompous, but it's true. How is that possible? Firstly, Web 3.0 will grant users ownership of their data, so you'll be able to treat your data like it's your property. Secondly, it will enable you to be paid for the work you are doing when making posts and giving likes on social media. Both options provide you with the opportunity to monetize the attention that you give and receive. The agreeable thing about Web 3.0 is that it's all about honest ownership. If a piece of art can be an NFT with easily traceable ownership, your data can be too. If you own your data, you can monetize or offer it on your terms, knowing who is going to use it and how. For instance, there is Permission, a tokenized Web 3.0 advertising platform that connects brands with consumers, with the latter getting crypto rewards for their data and engagement. 


Serverless-first: implementing serverless architecture from the transformation outset

While a serverless-first mindset provides a range of benefits, some businesses may be hesitant to make the transition due to concerns around cloud provider security, vendor lock-in, sunk costs from other strategies and ongoing issues with debugging and development environments. However, even among the most serverless-averse, this mindset can provide benefits to a select part of an organisation. Take for example a bank’s operations. While the maintenance of a traditional network infrastructure is crucial for uptime of the underlying database, with a serverless approach they have the freedom to implement an agile mindset with consumer-facing apps and technologies as demand grows. Agile and serverless strategies typically go hand-in-hand, and both can encourage quick development, modification and adaptation. In relation to concerns around vendor lock-in, some organisations may look towards a cloud-agnostic strategy. However, writing software for multiple clouds removes the ability to use features offered by one specific cloud, meaning any competitive advantage of using a specific vendor is then lost. 


CISO in the Age of Convergence: Protecting OT and IT Networks

Pan Kamal, head of products at BluBracket, a provider of code security solutions, says one of the first steps an organization can take is to create an IT-OT convergence task force that maps out the asset inventory and then determines where IT security policy needs to be applied within the OT domain. “Review industry-specific cybersecurity regulations and prioritize implementation of mandatory security controls where called for,” Kamal adds. “I also recommend investing in a converged dashboard -- either off the shelf or create a custom dashboard that can identify vulnerabilities and threats and prioritize risk by criticality.” Then, organizations must examine the network architecture to see if secure connections with one-way communications -- via data diodes for example -- can eliminate the possibility of an intruder coming in from the corporate network and pivoting to the OT network. Another key element is conducting a review of security policies related to both the equipment and the software supply chain, which can help identify secrets in code present in git repositories and help remediate them prior to the software ever being deployed.



Quote for the day:

"Inspired leaders move a business beyond problems into opportunities." -- Dr. Abraham Zaleznik

Daily Tech Digest - July 27, 2022

Zycus: Five digital transformation trends in procurement

Generating vast quantities of data, organisations need to be aware of the level of data management required in order to successfully deliver a digital transformation in procurement. “Not understanding the data implications may result in budget overruns, overtime, or scope reduction in data management. Data is a key input for many processes and decisions in modern organisations, and underestimating its relevance can cause an inability to meet goals related to supplier enablement or PO automation due to capacity and scope constraints,” said Zycus. When it comes to the quality of data, process digitalisation is a key driver. Process digitalisation reduces human error, generates greater business insights, improves decision-making capabilities, and increases value creation. ... “In recent years, Procurement departments have become more prone to cyberattacks in the form of malware via a software update, attacks on cloud services, ransomware, business email compromise, attack on supply chain, etc.,” commented Zycus. Such threats can result in a loss of sensitive data and/or financial losses. 


Striving for a better balance in the future of work

It is worth noting that the principle of coordinated working hours in offices grew out of working patterns in factories at a time when the technology for business was mainly an in-person exercise. Yet, as everyone who has been through the pandemic knows, knowledge workers no longer work that way: we’re asynchronous, remote, and international. In many senses, this change in expectations is no change at all. Knowledge work has always been marked by a sense of asynchronicity. People meet, talk, agree, and then go off and work in small groups or alone. What has changed is that 65% of workers now have, and expect, more flexibility to decide when they work. ... Perhaps one of the most boringly predictable challenges remote workers face involves the tools they’re asked to use. On average, workers have 6.2 apps sending them notifications at work, and 73% of them respond to those outside of working hours, further eroding the division between (asynchronous) work time and personal time. ... A worker may find that they do their work at times that suit them best, but still feel pressurized to pretend to be present the rest of the time, too.


The Metaverse can shake up Digital Commerce forever

The metaverse has already become a playground for luxury fashion brands, with some launching their new collection in the virtual world and others partnering up with developers to create their own bespoke games. In the near future, we anticipate more brands to follow and break the boundaries between virtual and physical reality to create more innovative, meaningful interactions with consumers. We are in the very early days here and our team will be working on many different pilots and experiments. There are several use-cases for Web 3.0 in e-commerce. For example, brands looking to connect with loyal users and fans can provide additional value by way of gated commerce enabled through NFTs. At the same time, brands and artists can use NFTs to build and monetize communities. We can create immersive shopping experiences in the Virtual Worlds/Metaverse, an ever-expanding world of real-time, with the help of virtual spaces in 3D. We can also enable e-commerce landscapes based on the Blockchain that will allow anyone to trade physical products on-chain.


SaaS Security Risk and Challenges

SaaS providers are unlikely to send infrastructure- and application-level security event logs to customers’ security information and event management (SIEM) solutions, leaving customers’ security operations teams lacking in terms of important information. This diminishes the ability to identify and manage potential security incidents. For example, it can be difficult to know whether and when a brute-force password replay attack is perpetrated against a SaaS customer user account. Such attacks could lead to undetected data breaches, resulting in the organization being considered liable for the data leak and for not reporting the incident to the appropriate parties (e.g., employees, customers, authorities) in a timely manner. ... It can be challenging for customers to understand the fundamental nature of a SaaS provider’s risk culture. Audits, certifications, questionnaires, and other materials paint a narrow picture of the providers’ security posture. Moreover, SaaS providers are unlikely to share their risk register with customers, as this would reveal excessive details about the SaaS provider’s security posture. Further, SaaS providers are unlikely to undergo detailed customer audits due to limited resources. 


Optimize Distributed AI Training Using Intel® oneAPI Toolkits

Supervised learning requires large amounts of labeled data. Labeling and annotation must be done manually by human experts, so it is laborious and expensive. Semi-supervised learning is a technique where both labeled and unlabeled data are used to train the model. Usually, the number of labeled data points is significantly less than the unlabeled data points. Semi-supervised learning exploits patterns and trends in data for classification. S-GANs tackle the requirement for vast amounts of training data by generating data points using generative models. The generative adversarial network (GAN) is an architecture that uses large, unlabeled datasets to train an image generator model via an image discriminator model. GANs comprise two models: generative and discriminative. 


The rise of adaptive cybersecurity

The desirable end state - easier said than done - is to embrace an adaptive cybersecurity posture, supported by people, process and technology - that is more responsive to the dynamism of today's cybersecurity landscape. As research firm Ecosystm notes, "anticipating threats before they happen and responding instantly when attacks occur is critical to modern cybersecurity postures. It is equally important to be able to rapidly adapt to changing regulations. Companies need to move towards a position where monitoring is continuous, and postures can adapt, based on risks to the business and regulatory requirements. This approach requires security controls to automatically sense, detect, react, and respond to access requests, authentication needs, and outside and inside threats, and meet regulatory requirements." Adaptation is also likely in future to involve artificial intelligence. A golden example of applying AI adaptively for cybersecurity would be to be able to detect the presence of code, packages or dependencies that are impacted by zero-days or other vulnerabilities, and to block those threats. 


The Software Architecture Handbook

One problem that comes up when implementing microservices is that the communication with front-end apps gets more complex. Now we have many servers responsible for different things, which means front-end apps would need to keep track of that info to know who to make requests to. Normally this problem gets solved by implementing an intermediary layer between the front-end apps and the microservices. This layer will receive all the front-end requests, redirect them to the corresponding microservice, receive the microservice response, and then redirect the response to the corresponding front-end app. The benefit of the BFF pattern is that we get the benefits of the microservices architecture without overcomplicating the communication with front-end apps. ... Horizontal scaling, on the other hand, means setting up more servers to perform the same task. Instead of having a single server responsible for streaming we'll now have three. Then the requests performed by the clients will be balanced between those three servers so that all handle an acceptable load.


The Rise of Domain Experts in Deep Learning

Nowadays, a lot of it is people who are like, “Oh, my god, I feel like deep learning is starting to destroy expertise in my industry. People are doing stuff with a bit of deep learning that I can’t even conceive of, and I don’t want to miss out.” Some people are looking a bit further ahead, and they’re more, like, “Well, nobody is really using deep learning in my industry, but I can’t imagine it’s the one industry that’s not going to be affected, so I want to be the first.” Some people definitely have an idea for a company that they want to build. The other thing we get a lot of is companies sending a bunch of their research or engineering teams to do the course just because they feel like this is a corporate capability that they ought to have. And it’s particularly helpful with the online APIs that are out there now that people can play around with — Codex or DALL-E or whatever — and get a sense of, “Oh, this is a bit like something I do in my job, but it’s a bit different if I could tweak it in these ways.” However, these models also have the unfortunate side effect, maybe, of increasing the tendency of people to feel like AI innovation is only for big companies, and that it’s outside of their capabilities.


Q&A: Dropbox exec outlines company's journey into a remote-work world

"Underneath virtual first is a number of tenets that define how we think about the future of work. One of those is ‘asynchronous by default,' the idea being that if we're going to have people working remotely, that shouldn't mean they spend eight hours a day on video calls. Instead, at Dropbox, you're measured on your output and the impact that you make, rather than how many meetings you can sit in. "That then led us to think about how much time we should be spending in meetings, and as a result, we rolled out something called ‘core collaboration hours’ where employees reserve four hours each day to be available for meetings. That means there’s times when you're open to meet with your team or anyone else in the company, but also that you've got those other four hours in the day to focus on the work that you need to do. "Does that mean you wouldn't flex that to meet with somebody who's in a different time zone or something else? Absolutely not. It's your time to manage as an individual, because we're measuring you on the impact and output that you're making.


India poised to be at the center of metaverse-based gaming

Long before the metaverse became popular, VR games like Minecraft and Roblox had captivated scores of young gamers. The immersive gaming experience delivered by AR/VR and the rapid growth of devices powered by AR/VR and XR has further accelerated the growth of the metaverse to the current level. Meanwhile, the growth of high-speed Internet has acted as the catalyst driving this transformation. While VR headsets top the list of gaming devices in the metaverse, mobile phones, gaming PCs, gaming consoles, and hearables/wearables are also evolving to suit the demands of metaverse applications. The metaverse also blends games with other apps like live streaming, cryptocurrencies, and social media, creating several possibilities for players to transact across the ecosystem chain. For example, gamers can use NFTs/cryptocurrencies in the metaverse to purchase digital assets, which they can preserve for another game, maybe from a different publisher. Thus players will earn greater value for money while also enjoying a near-real-world gaming experience with possibilities never imagined before. 



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - July 26, 2022

Don’t get too emotional about emotion-reading AI

Unfortunately, the “science” of emotion detection is still something of a pseudoscience. The practical trouble with emotion detection AI, sometimes called affective computing, is simple: people aren’t so easy to read. Is that smile the result of happiness or embarrassment? Does that frown come from a deep inner feeling, or is it made ironically or in jest? Relying on AI to detect the emotional state of others can easily result in a false understanding. When applied to consequential tasks, like hiring or law enforcement, the AI can do more harm than good. It’s also true that people routinely mask their emotional state, especially in business and sales meetings. AI can detect facial expressions, but not the thoughts and feelings behind them. Business people smile and nod and empathetically frown because it’s appropriate in social interactions, not because they are revealing their true feelings. Conversely, people might dig deep, find their inner Meryl Streep and feign emotion to get the job or lie to Homeland Security. In other words, the knowledge that emotion AI is being applied creates a perverse incentive to game the technology.


How AI and decision intelligence are changing the way we work

Technology can also provide a simple yet powerful AI tool for employees to use during their day-to-day activities. They can capture lessons learned as they work in real time, and adjust their actions when a corrective action is needed, also in real time. Throughout this process, AI defines actionable takeaways, shares insights and offers concise lessons learned (suggesting corrective actions, for example), all of which can boost the entire team’s performance. Since AI turns the data collected from daily work into actionable lessons learned, every team member can contribute to and draw on their team’s collective knowledge — and the entire company’s collective knowledge as well. The technology prompts them to capture their work, and it “knows” when a team member should see information relevant to their current task. AI ensures everyone has the right data at the right time, exactly when they need it. In this vision of a data-driven environment, access to data liberates and empowers employees to pursue new ideas, Harvard Business Review writes.


The emergence of multi-cloud networking software

Contrary to general perception, Hielscher argues that many enterprises do not voluntarily choose to operate within a multi-cloud environment. In many cases, the environment is thrust upon them through a merger, acquisition, or an isolated departmental choice that preceded a decision to consolidate architectures. "This results in organizational gaps, skill-set gaps, and contractual and spending overlaps," he explains. "As with any IT strategy, the first step is to establish which goals are to be addressed and the timeframes to address them in." Potential adopters should be prepared to spend both time and money when evaluating and comparing MCNS products. "For example, organizations should plan costs associated with staffing a team of engineers to see them through the evaluation process," Howell says. While virtually all large cloud-focused enterprises, and many smaller organizations, can benefit from the right MCNS, it's important to keep an eye on service and the bottom line. "Benefits to the enterprise must be greater than the cost of the solution," Howell warns.


Software Methodologies — Waterfall vs Agile vs DevOps

Software development projects that are clearly defined, predictable, and unlikely to undergo considerable change are best handled using the waterfall method. Typically, smaller, simpler undertakings fall under this category. Waterfall projects don't incorporate feedback during the development cycle, are rigid in their process definition, and offer little to no output variability. Agile methods are built on incremental, iterative development that promptly produces a marketable business product. The product is broken down into smaller pieces throughout incremental development, and each piece is built, tested, and modified. Agile initiatives don't begin with thorough definitions in place. They rely on ongoing feedback to guide their progress. In Agile development, DevOps is all about merging teams and automation. Agile development is adaptable to both traditional and DevOps cultures. In contrast to a typical dev-QA-ops organization, developers do not throw code over the wall in DevOps. In a DevOps setup, the team is in charge of overseeing the entire procedure.


Why you need to protect abandoned digital assets

The dangers posed by these abandoned assets are multifarious. Local digital assets can be usurped and used for malicious purposes, such as identity theft and credit card fraud. Not only does this leave organisations open to significant fines for breaches of data protection laws, there is the associated reputational harm caused by these incidents. “The risk depends on what the connection is pointing to and what authentication or security measures have been put in place,” says Nahmias. “Security teams tend to be more lenient about connections to internal resources than they are about connections to external ones.” The distributed nature of modern enterprise means that networks are no longer spiders' webs, but a complex mesh. While this is a far more robust form of network connectivity, there are also far more connections that need to be managed. As such, there is a potential risk of network connections from abandoned assets still being active, essentially permitting access to the rest of the corporate network. In many ways, this is a far greater risk to the organisation, as malicious actors could potentially obtain confidential information through these unsecured connections.


How the cybersecurity skills gap threatens your business

The deficit in skilled cybersecurity personnel is now directly affecting businesses’ ability to remain secure. The World Economic Forum has stated that 60 per cent would “find it challenging to respond to a cybersecurity incident owing to the shortage of skills within their team” and industry body ISACA found that 69 per cent of those businesses that have suffered a cyber attack in the past year were somewhat or significantly understaffed. The impacts can be devastating. Accreditation body ISC(2)’s Cybersecurity Workforce Study found that staff shortages were leading to misconfigured systems, tardy patching of systems, lack of oversight, insufficient risk assessment, lack of threat awareness and rushed deployments. With these shortages now jeopardising businesses’ ability to function, the hiring function is under significant pressure to up its game. To make matters worse, these shortages are expected to intensify. Last year the Department for Culture, Media and Sport (DCMS) predicted there would be an annual shortfall of 10,000 new entrants into the cybersecurity market but in its latest report, released in May, that was revised to 14,000 every year. 


Kanban vs Scrum: Differences

Kanban is a project management method that helps you visualize the project status. Using it, you can readily visualize which tasks have been completed, which are currently in progress, and which tasks are still to be started. The primary aim of this method is to identify potential roadblocks and resolve them as soon as possible while continuing to work on the project at an optimum speed. Besides ensuring timely, quality delivery, Kanban ensures all team members can see the project and task status at any time. Thus, they can have a clear idea about the risks and complexity of the project and manage their time accordingly. However, the Kanban board involves minimal communication. ... Scrum is a popular agile method ideal for teams who need to deliver the product in the quickest possible time. This involves repeated testing and review of the product. It focuses on the continuous progress of the product by prioritizing teamwork. With the help of Scrum, product development teams can become more agile and decisive while becoming responsive to surprising and sudden changes. Being a highly transparent process, it enables teams and organizations to evaluate projects better as it involves more practicality and fewer predictions.


8 top SBOM tools to consider

Indeed, SBOMs are no longer just a good idea; they're a federal mandate. According to President Joe Biden's July 12, 2021, Executive Order on Improving the Nation’s Cybersecurity, they're a requirement. The order defines an SBOM as "a formal record containing the details and supply chain relationships of various components used in building software." It's an especially important issue with open-source software, since "software developers and vendors often create products by assembling existing open-source and commercial software components." Is that true? Oh yes. We all know that open-source software is used everywhere for everything. But did you know that managed open-source company Tidelift counts 92% of applications as containing open-source components? In fact, the average modern program comprises 70% open-source software. Clearly, something needs doing. The answer, according to the Linux Foundation, Open Source Security Foundation (OpenSSF), and OpenChain, is SBOMs. Stephen Hendrick, the Linux Foundation's vice president of research, defines SBOMs as "formal and machine-readable metadata that uniquely identifies a software package and its contents; it may include other information about its contents, including copyrights and license data."


The race to build a social media platform on the blockchain

DSCVR, a blockchain-based social network built on Dfinity’s Internet Computer protocol, has entered the race to build a scalable DeSo platform with $9 million in seed funding led by Polychain Capital. Other participants in the round include Upfront Ventures, Tomahawk VC, Fyrfly Venture Partners, Shima Capital and Bertelsmann Digital Media Investments (BDMI), according to the company. It’s a competitive space with plenty of startups and large companies racing to build a network that provides utility for its users. Earlier this month, ex-Coinbase employee Dan Romero secured $30 million led by a16z to develop Farcaster, a DeSo protocol that allows users to move their social identity across different apps. TechCrunch covered another seed-stage startup, Primitives, that raised a $4 million round in May for its own Solana-based DeSo network. Big tech is in the game, too — Twitter funds an offshoot of its service called BlueSky, an open-source DeSo project founded in 2019 that hasn’t gone live but is experimenting publicly with its development process.


7 ways to keep remote and hybrid teams connected

Marko Gargenta, CEO and founder of PlusPlus, a maker of internal training software that he founded after creating Twitter’s Twitter University, uses that idea to create company culture. It started at Twitter because he saw that some people had deep knowledge in topics that would benefit others. He started tapping them to give workshops and share that knowledge. Those 30-minute workshops were informal, in person, and wildly popular. “One in five engineers were regularly teaching classes,” he says. Those continued when the world went remote, but they shifted to canned videos. Those did not have the same impact. “People wanted human connection,” he says. “So, we started dialing the pendulum back toward live connection. Now they happen over Zoom but are very synchronous.” That has worked well. “If you look at ancient Greece,” says Gargenta, “Plato started The Academy. It was the place where people chasing ideas or mastery congregated, which created a sense of a culture. This pattern of people chasing mastery creates community. It’s what shaped ancient Greece, and all sorts of innovations came out of that.”



Quote for the day:

"Leadership is a matter of having people look at you and gain confidence, seeing how you react. If you're in control, they're in control." -- Tom Landry