While threat actors may use methods to actively infiltrate a company’s defences, sometimes the vulnerabilities are already there. CSPs are usually quick to patch known vulnerabilities without requiring customer interaction. However, when cloud services involve the customer in managing the software, oversight can be tricky due to the complexity of the environment. Businesses should prioritise regularly scanning for known vulnerabilities and patching to the latest version of each type of software they’re running on their systems. On top of this, IT leaders should maintain an up-to-date inventory of assets to ensure visibility of all endpoints that require patching. In the rush to gain a competitive advantage, cloud environments are evolving rapidly, with organisations adopting hybrid or multi-cloud approaches. This complexity increases the risk of misconfiguration. Misconfigured cloud infrastructure can expose data or resources to the public internet, and failure to implement encryption or multi-factor authentication can allow actors to access cloud-related tools, data, assets, or systems.
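The misconfiguration checks described above can be automated. Below is a minimal Python sketch of such an audit; the inventory structure and field names are hypothetical stand-ins for whatever your CSP's API actually returns.

```python
# Hypothetical inventory of cloud storage configurations; the field
# names ("public", "encrypted", "mfa_delete") are illustrative only.
BUCKETS = [
    {"name": "public-assets", "public": True,  "encrypted": True,  "mfa_delete": True},
    {"name": "customer-data", "public": True,  "encrypted": False, "mfa_delete": False},
    {"name": "backups",       "public": False, "encrypted": True,  "mfa_delete": True},
]

def audit(buckets):
    """Flag buckets that are publicly readable, unencrypted, or lack MFA."""
    findings = []
    for b in buckets:
        if b["public"]:
            findings.append((b["name"], "exposed to the public internet"))
        if not b["encrypted"]:
            findings.append((b["name"], "encryption at rest disabled"))
        if not b["mfa_delete"]:
            findings.append((b["name"], "MFA protection missing"))
    return findings

for name, issue in audit(BUCKETS):
    print(f"{name}: {issue}")
```

Run against a real asset inventory on a schedule, this kind of check gives the visibility over endpoints and configurations that the paragraph above recommends.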
COVID-19 shut down businesses, supply chains and entire countries, and changed the way companies buy, sell and work. But other events like trade wars, legislation, and regulations can all impact technology providers. It’s not about predicting these events as much as identifying existing trends (e.g., work from home, e-commerce) that will accelerate if these events occur. Customers demand products and services that meet their particular business or IT needs. In a broader sense, customer demand and expectations are shaped by cultural changes and world events (e.g., stay-at-home orders increasing demand for distributed work tools). User experiences and trends like mobility or subscription and freemium pricing, which were made popular in consumer markets, have led IT customers to want the same benefits from technology providers. Emerging technologies may seem like novelties when they first appear, but when these technologies become a trend, they can profoundly shape buying and selling behavior and enable new business models. Over the next several years, today’s immature technologies and “weak signals” have the potential to disrupt what your product does, who it serves and how you deliver it.
Fast data enables full-circle delivery of data that is “in motion.” In other words, it’s generated and consumed instantly by interactive applications running on large numbers of devices. Fast data enables organizations to act on insights gained from user interactions as those insights are generated, at the point of the interaction. And because decisions or actions take place right at the front end, fast data architectures are, by definition, distributed and real-time. Big data is focused on capturing data, storing it, and processing it periodically in batches. A fast data architecture, on the other hand, processes events in real time. Big data focuses on volume; with fast data, the emphasis is on velocity. Here’s an example. A credit card company might want to create credit risk models based on demographic data. That’s a big data challenge. A fast data architecture would be required if that credit card company wants to send fraud alerts to customers in real time, as suspicious activity occurs in their accounts. Think of FedEx. To track millions of packages and ensure on-time, accurate delivery across the planet, FedEx needs access to the right real-time data to perform real-time analysis and deliver the right interaction—right away, right there, not a day later.
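The fraud-alert example can illustrate what "processing events in real time" means in code. The Python sketch below handles each transaction the moment it arrives and keeps only incremental per-account state, rather than recomputing anything in batch; the simple spike threshold is a hypothetical stand-in for a real risk model.

```python
from collections import defaultdict

# Running per-account statistics, updated as each event arrives.
totals = defaultdict(lambda: {"count": 0, "mean": 0.0})

def process(event, threshold=3.0):
    """Handle one card transaction at the point of interaction.

    Flags the event if its amount exceeds `threshold` times the
    account's running mean -- a deliberately crude stand-in for a
    real fraud model.
    """
    stats = totals[event["account"]]
    alert = stats["count"] > 0 and event["amount"] > threshold * stats["mean"]
    # Update the running mean incrementally (no batch recomputation).
    stats["count"] += 1
    stats["mean"] += (event["amount"] - stats["mean"]) / stats["count"]
    return alert

stream = [
    {"account": "A", "amount": 40.0},
    {"account": "A", "amount": 55.0},
    {"account": "A", "amount": 900.0},   # suspicious spike
]
alerts = [e for e in stream if process(e)]
```

The batch equivalent of this (the "big data" side) would instead land all transactions in storage and score them hours later; the point of the fast path is that the alert decision happens while the event is still in motion.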
When you figure out that something “bad” has happened in your program, something that should never happen and that you cannot work around, one way to throw the failure up to a higher level is via an assert. This throws an exception to the calling code, which then has to decide what to do with it. An example: you expect an int as input and you get a string. That is a broken contract, and it should be the caller's responsibility to handle the resulting exception properly; why should your program have to work out why the contract was broken? That might warrant an assert right there. Depending on the organization you work in, when and how to use exceptions and asserts might get philosophical; it might equally be subject to very specific rules, and there can be perfectly valid reasons why an organization prefers one approach over another. Learn the rules, and if there are no rules, have a discussion and apply your best judgement. In any case, dead programs tell no lies. Better to kill the program than to deal with polluted data a year in the future.
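A minimal Python sketch of the idea, with hypothetical function names: the callee asserts its input contract, and the caller, which knows the context, decides how to handle the failure. One caveat worth knowing: Python strips asserts when run with the -O flag, so production contract checks often raise TypeError explicitly instead.

```python
def average(values):
    """Compute the mean of a list of numbers.

    Input contract: every element must be an int or float.
    A broken contract is the caller's bug, so we fail loudly
    rather than silently producing polluted data.
    """
    assert all(isinstance(v, (int, float)) for v in values), \
        "contract broken: average() expects only numbers"
    return sum(values) / len(values)


def report(raw_values):
    # The caller knows where the data came from, so it decides
    # whether to recover, log, or let the program die.
    try:
        return f"mean = {average(raw_values):.2f}"
    except AssertionError as exc:
        return f"rejected bad input: {exc}"


print(report([1, 2, 3]))        # mean = 2.00
print(report([1, "two", 3]))    # rejected bad input: contract broken ...
```

Here the callee never guesses why it received a string; it kills the computation immediately and leaves the recovery decision where the context lives.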
Social engineering is a collective term for ways in which fraudsters manipulate people into performing certain actions. It’s generally used in an information security context to refer to the tactics crooks use to trick people into handing over sensitive information or exposing their devices to malware. This often comes in the form of phishing scams – messages supposedly from a legitimate sender that ask the recipient to download an attachment or follow a link that directs them to a bogus website. However, social engineering isn’t always malicious. For example, say you need someone to do you a favour, but you’re unsure that they’ll agree if you ask them apropos of nothing. You might grease the wheels by offering to do something for them first, making them feel obliged to say yes when you ask them to return the favour. That’s a form of social engineering. You’re performing an action that will compel the person to do something that will benefit you. Understanding social engineering in this context helps you see that social engineering isn’t simply an IT problem. It’s a vulnerability in the way we make decisions and perceive others – something we delve into more in the next section.
Before changing your setup, you should also consider your ISP package. If you're subscribed to a low-speed offering, new equipment won't necessarily help; a package upgrade could be a better option. If you are a sole user who needs a stable, powerful connection -- such as for resource-hungry work applications or gaming -- a traditional router may be all you need. Wired should be quicker than wireless, so investing in a simple Ethernet cable, easily picked up for $10 to $15, could be enough. Wi-Fi range extenders, too, could be considered as an alternative to mesh if you just need to boost coverage in some areas, and will likely be less expensive than purchasing individual mesh nodes. Some vendors also offer mesh 'bolt-ons', such as Asus' AiMesh, which can connect existing routers to create mesh-like coverage without ripping everything out and starting again. However, mesh networking is here to stay, and at a time when many of us are now working in the home rather than in traditional offices, a mesh setup could be a future-proof investment.
Transparency and security assurance are important, but the Intel report also reveals other factors that businesses consider for endpoint and network infrastructure purchasing decisions. Interoperability with existing tools and platforms ranked highest at 63%, followed by installation cost (58%), system complexity (57%), vendor support (55%), and scalability issues (53%). One area that is particularly interesting is the intersection of hardware and software, and how the two can work together to solve cybersecurity problems in innovative ways. More than three-fourths of the survey participants indicated that it is highly important for technology providers to offer hardware-assisted capabilities to mitigate software exploits. More than 70% also noted that it is important for technology providers to offer mechanisms and security controls to protect distributed workloads. Suzy Greenberg, Vice President of Intel Product Assurance and Security, joined me recently on the TechSpective Podcast to talk about this report and some of the insights and trends she finds interesting.
The impact could be fairly wide: “As of Kubernetes v1.20, Docker is deprecated and the only container engines supported are CRI-O and Containerd,” Sasson explained. “This leads to a situation in which many clusters use CRI-O and are vulnerable. In an attack scenario, an adversary may pull a malicious image to multiple different nodes, crashing all of them and breaking the cluster without leaving a way to fix the issue other than restarting the nodes.” When a container engine pulls an image from a registry, it first downloads the image's manifest, which holds the instructions for building the image, including a list of the layers that compose the container file system. The container engine reads this list, then downloads and decompresses each layer. “An adversary could upload to the registry a malicious layer that aims to exploit the vulnerability and then upload an image that uses numerous layers, including the malicious layer, and by that create a malicious image,” Sasson explained. “Then, when the victim pulls the image from the registry, it will download the malicious layer in that process and the vulnerability will be exploited.” Once the container engine starts downloading the malicious layer, the end result is a deadlock.
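To make the manifest-then-layers flow concrete, here is a Python sketch of the kind of sanity check a pull pipeline could run on a manifest before downloading any layer. The field names follow the OCI image manifest layout, but the limits are illustrative assumptions, not what CRI-O actually enforces.

```python
import json

MAX_LAYERS = 127           # illustrative cap, not a real engine's limit
MAX_LAYER_BYTES = 10**10   # likewise illustrative

def validate_manifest(manifest_json):
    """Sanity-check an OCI-style image manifest before any layer download.

    Rejects manifests with an implausible number of layers or oversized
    layers -- the "numerous layers" pattern the attack above relies on.
    """
    manifest = json.loads(manifest_json)
    layers = manifest.get("layers", [])
    if len(layers) > MAX_LAYERS:
        raise ValueError(f"refusing image with {len(layers)} layers")
    for layer in layers:
        if layer.get("size", 0) > MAX_LAYER_BYTES:
            raise ValueError(f"layer {layer.get('digest')} too large")
    return layers

ok = json.dumps({"layers": [{"digest": "sha256:abc", "size": 1024}]})
print(len(validate_manifest(ok)))   # 1
```

A pre-download check like this cannot stop a crafted layer payload by itself, but it shows where in the pull sequence (manifest first, layers second) defensive validation can sit.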
NFTs have seen massive increases in trade volume and users in recent times. Investments in NFTs rose by 299% across 2020, and the NFT market’s sales volume grew by 2,882% in February alone. This increased interest stems in part from the improving infrastructure surrounding NFTs, which now supports full-stack services spanning trading venues, minting platforms, marketplaces and more. While detractors have suggested that the current crest of interest represents a bubble, experts have pointed out that the technology underlying NFTs is strong enough to survive a possible crash and is expected to be around for quite some time. According to Beeple, a digital artist who recently made almost $70 million from his NFT sale, the technology will support any work or piece of real value. Similarly, the new owners of Beeple’s record-setting piece of artwork, Vignesh (Metakovan) and Anand (Twobadour), believe their transaction represents a paradigm shift in how the world perceives art. They see NFTs as having an equalizing effect between the traditionally dominant West and the global South.
One of the main applications for machine learning is defect detection and classification. The first step is using machine learning to detect actual defects and ignore noise. We are seeing many examples where machine learning is much better at extracting the actual killer defect signal from a noisy background of process and pattern variations. The second step is to leverage machine learning to classify defects. The challenge these days is that when optical inspectors run at high sensitivity to capture the most subtle, critical defects on the most critical layers, other anomalies are also detected. Machine learning is first applied to the inspection results to optimize the defect sample plan sent for review. Then, high-resolution SEM images are taken of those sites and additional machine learning is used to analyze and classify the defects to provide fab engineers with accurate information about the defect population – actionable data to drive process decisions. An emerging application is to make use of machine learning to be more predictive about where to inspect and measure.
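As a toy illustration of the classification step described above, the pure-Python sketch below trains a nearest-centroid classifier on made-up defect features. All class names, features, and numbers are hypothetical; production fabs apply far richer models directly to high-resolution SEM images.

```python
import math

# Toy labelled review data: (feature vector, class). The two features
# might represent the size and brightness of a reviewed defect site.
TRAIN = [
    ((0.2, 0.1), "nuisance"),
    ((0.3, 0.2), "nuisance"),
    ((2.0, 1.8), "particle"),
    ((2.2, 2.1), "particle"),
    ((0.9, 3.0), "scratch"),
    ((1.1, 3.2), "scratch"),
]

def centroids(samples):
    """Mean feature vector per class -- this is the entire 'model'."""
    sums = {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec) + [0])
        for i, v in enumerate(vec):
            acc[i] += v
        acc[-1] += 1
    return {lbl: tuple(s / acc[-1] for s in acc[:-1]) for lbl, acc in sums.items()}

def classify(vec, model):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(model, key=lambda lbl: math.dist(vec, model[lbl]))

model = centroids(TRAIN)
print(classify((2.1, 2.0), model))   # particle
```

The same shape of pipeline, with separating killer defects from nuisance detections as the first classification and binning true defects by type as the second, is what turns raw inspection hits into the actionable data the paragraph describes.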
Quote for the day:
"Real leaders are ordinary people with extraordinary determinations." -- John Seaman Garns