Observers recently recentred the debate on a particular aspect of space security, namely anti-satellite (ASAT) technologies. The destruction of assets placed in outer space is high on the list of issues they identify as most pressing and requiring immediate action. As a result, some researchers and experts have put forward proposals to advance a transparent and cooperative approach, promoting the cessation of destructive operations whether conducted in outer space or launched from the ground. One approach was the development of ASAT Test Guidelines, first initiated in 2013 by a Group of Governmental Experts on Outer Space Transparency and Confidence-Building Measures. Another is through general calls to ban anti-satellite tests, not only to build a more comprehensive arms control regime for outer space and prevent the production of debris, but also to reduce threats to space security and curb destabilising uses of force. Many members of the space community threw their support behind a letter urging the United Nations (UN) General Assembly to take up for consideration a kinetic ASAT Test Ban Treaty, with the aim of maintaining safe access to Earth orbit and reducing concerns about collisions and the proliferation of space debris.
Leveraging the connections in data is a prominent way of getting value out of it. Graphs are the best way of leveraging connections, and graph databases excel at this: they make expressing and querying connections easy and powerful. This is why graph databases are a good match for use cases that depend on connected data: anti-fraud, recommendations, Customer 360 and Master Data Management. From operational applications to analytics, and from data integration to machine learning, graph gives you an edge. There is a difference between graph analytics and graph databases. Graph analytics can be performed on any back end, as it only requires reading graph-shaped data. Graph databases are databases that fully support both reads and writes using a graph data model, API and query language. Graph databases have been around for a long time, but the attention they have been getting since 2017 is off the charts. AWS and Microsoft moving into the domain, with Neptune and Cosmos DB respectively, exposed graph databases to a wider audience.
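To illustrate what "leveraging connections" means in practice, here is a minimal, database-agnostic sketch in Python: a small in-memory graph stored as adjacency lists and traversed two hops to produce recommendation-style results. The data and function names are invented for the example; a graph database would express the same traversal natively in its query language.

```python
from collections import defaultdict

# Hypothetical purchase graph: (customer, product) edges.
edges = [
    ("alice", "book"), ("alice", "lamp"),
    ("bob", "book"), ("bob", "desk"),
    ("carol", "lamp"), ("carol", "chair"),
]

# Build adjacency lists in both directions.
bought = defaultdict(set)      # customer -> products
bought_by = defaultdict(set)   # product -> customers

for customer, product in edges:
    bought[customer].add(product)
    bought_by[product].add(customer)

def recommend(customer):
    """Recommend products bought by customers who share a purchase (a two-hop traversal)."""
    recommendations = set()
    for product in bought[customer]:
        for other in bought_by[product] - {customer}:
            recommendations |= bought[other] - bought[customer]
    return recommendations

print(recommend("alice"))  # {'desk', 'chair'}
```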
So where will observability head in the next two to five years? Fong-Jones said the next step is to support developers in adding instrumentation to code, expressing a need to strike a balance between easy, out-of-the-box instrumentation and per-use-case annotations and customizations. Suereth said that over the next five years the OpenTelemetry project is heading toward being useful to app developers, for whom instrumentation can be particularly expensive. “Target devs to provide observability for operations instead of the opposite. That’s done through stability and protocols.” He said that right now observability, as with Prometheus, is much more focused on operations than on developer languages. “I think we’re going to start to see applications providing observability as part of their own profile.” Suereth continued that the OpenTelemetry open source project has an objective of offering an API where all the traces, logs and metrics come with a single pull, but how much data should be attached to it is still to be determined.
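To make the instrumentation discussion concrete, here is a minimal sketch using the OpenTelemetry Python API (the opentelemetry-api package). The service, span and metric names are illustrative, and a real deployment would also configure an SDK and an exporter to actually ship the data.

```python
from opentelemetry import trace, metrics

# Names below ("order-service", "orders_processed", "process_order") are illustrative.
tracer = trace.get_tracer("order-service")
meter = metrics.get_meter("order-service")
orders_counter = meter.create_counter("orders_processed")

def process_order(order_id: str) -> None:
    # Wrap the unit of work in a span so operations can trace it later.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic ...
        orders_counter.add(1, {"outcome": "success"})
```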
Many scaling methods require knowledge of critical values within the feature distribution and can cause data leakage. For example, a min-max scaler should be fit on the training data only rather than the entire data set. If the scaler is fit on all the data and the minimum or maximum happens to lie in the test set, you have introduced data leakage into the prediction process. ... The one-dimensional frequency plot shown below each distribution provides insight into the data. At first glance, this information looks redundant, but these plots directly address problems that arise when representing data in histograms or as distributions. For example, when data is transformed into a histogram, the number of bins must be specified. It is difficult to decipher any pattern with too many bins, and with too few bins the shape of the distribution is lost. Moreover, representing data as a distribution assumes the data is continuous. When data is not continuous, this may indicate an error in the data or an important detail about the feature. The one-dimensional frequency plots fill in the gaps where histograms fail.
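A minimal sketch of the leakage-free pattern with scikit-learn's MinMaxScaler (the data here is synthetic and purely illustrative): the scaler is fit on the training split only, and the learned minimum and maximum are then reused to transform the test split.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(100, 3)  # illustrative feature matrix
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the training min/max; no leakage
```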
Corporate skilling programs have been founded on frameworks and models from the world of academia. Even as we have moved to digital learning platforms, the core tenets of these programs tend to remain the same: a standard course with finite learning material, a uniformly structured progression to navigate the learning, and the exact same assessment tool to measure progress. This uniformity and standardization have been the only way for organizations to skill their employees at scale. As a result, organizations made a trade-off: content-heavy learning solutions that focus on knowledge dissemination but offer no way to measure the benefit beyond vanity metrics have become the norm for training the workforce at large. On the other hand, one-on-one coaching programs that promise results are reserved for the top one or two percent of the workforce, usually high-performing or high-potential employees. This is because such programs have a clear, measurable, and direct impact on behavioral change and job performance.
Governance across the whole SaaS estate is both nuanced and complicated. While the native security controls of SaaS apps are often robust, it is the organization's responsibility to ensure that all configurations are properly set, from global settings down to every user role and privilege. It only takes one unknowing SaaS admin changing a setting or sharing the wrong report for confidential company data to be exposed. The security team is burdened with knowing every app, user and configuration and ensuring they are all compliant with industry and company policy. Effective SaaS Security Posture Management (SSPM) solutions address these pain points by providing full visibility into the company's SaaS security posture and checking for compliance with industry standards and company policy. Some solutions even offer the ability to remediate right from within the solution. As a result, an SSPM tool can significantly improve security-team efficiency and protect company data by automating the remediation of misconfigurations throughout the increasingly complex SaaS estate.
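As a rough illustration of the kind of check an SSPM tool automates, here is a hypothetical Python sketch that compares each app's current settings against a policy baseline and reports drift. The apps, setting names and baseline values are invented for the example; in practice the configurations would be pulled from each app's admin API.

```python
# Hypothetical policy baseline: required values for selected SaaS settings.
POLICY_BASELINE = {
    "sso_required": True,
    "external_sharing": "disabled",
    "session_timeout_minutes": 30,
}

# Hypothetical snapshot of current configurations per app.
current_configs = {
    "crm_app":  {"sso_required": True,  "external_sharing": "enabled",  "session_timeout_minutes": 30},
    "docs_app": {"sso_required": False, "external_sharing": "disabled", "session_timeout_minutes": 120},
}

def find_misconfigurations(configs, baseline):
    """Return (app, setting, actual, expected) tuples for every setting that drifts from policy."""
    findings = []
    for app, settings in configs.items():
        for key, expected in baseline.items():
            actual = settings.get(key)
            if actual != expected:
                findings.append((app, key, actual, expected))
    return findings

for app, key, actual, expected in find_misconfigurations(current_configs, POLICY_BASELINE):
    print(f"{app}: {key} is {actual!r}, policy requires {expected!r}")
```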
What happens if your cloud environment goes down due to challenges outside your control? If your answer is “Eek, I don’t want to think about that!” you’re not prepared enough. Disaster preparedness plans can include running your workload across multiple availability zones or regions, or even in a multicloud environment. Make sure you have stakeholders (and back-up stakeholders) assigned to any manual tasks, such as switching to backup instances or relaunching from a system restore point. Remember, don’t wait until you’re faced with a worst-case scenario to test your response. Set up drills and trial runs to make sure your ducks are quacking in a row. One thing you might not imagine the cloud being is … boring. Without cloud automation, there are a lot of manual and tedious tasks to complete, and if you have 100 VMs, they’ll require constant monitoring, configuration and management 100 times over. You’ll need to think about configuring VMs according to your business requirements, setting up virtual networks, adjusting for scale and even managing availability and performance.
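As a sketch of the tedium that cloud automation removes, here is a hypothetical Python loop that reconciles a desired configuration and runs a health check across a VM inventory in one pass instead of by hand, 100 times over. The inventory, desired_config and helper functions are invented for illustration and would be backed by your cloud provider's SDK in practice.

```python
# Hypothetical inventory of VMs and the configuration we want each one to have.
inventory = [{"name": f"vm-{i:03d}", "size": "small", "network": "default"} for i in range(100)]
desired_config = {"size": "medium", "network": "prod-vnet"}

def apply_config(vm, config):
    """Stand-in for calls to the cloud provider's SDK (resize, attach network, etc.)."""
    vm.update(config)

def is_healthy(vm):
    """Stand-in for a real health probe (status API, HTTP check, etc.)."""
    return vm["size"] == desired_config["size"]

# Automate the tedious part: reconcile and verify every VM in one pass.
for vm in inventory:
    if any(vm.get(k) != v for k, v in desired_config.items()):
        apply_config(vm, desired_config)
    if not is_healthy(vm):
        print(f"{vm['name']} needs attention")
```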
Quote for the day:
"Leaders begin with a different question than others. Replacing who can I blame with how am I responsible?" -- Orrin Woodward