Employers increasingly expect to see hands-on experience, says Keatron Evans, principal security researcher at security education provider InfoSec. “Have you done packet capture analysis? Can you understand and parse logs or done incident response in the cloud? It’s important to have that kind of demonstrable hands-on experience verbalized in a resume,” he says. The expectation is high because even if you haven’t held a security analyst job, hands-on experience can be acquired in other ways today, such as training exercises offered by companies like InfoSec, Immersive Labs, and Pluralsight. “Before, training was mostly certificate-driven—it wasn’t geared toward proving you can do these things,” Evans says. “Now there’s simulation in the training environment, which is turning into a good gateway to get your foot in the door.” If candidates can send a five-minute screen capture of themselves performing a task, “it’s worth more than a thousand words,” Evans says. Capture-the-flag (CTF) events are another highlight to include. If you’ve placed well in a well-known CTF or completed a penetration test, put that at the top of the resume as well, he says.
Organizations the world over are indeed struggling with unnecessarily complicated multi-cloud environments. Validating that struggle, Enterprise Strategy Group recently surveyed 1,257 IT decision makers at enterprise and midmarket organizations that use both public cloud infrastructure and modern on-premises private cloud environments. The results hit home: cloud fragmentation is getting worse over time, and many companies are seeking a ‘savior’ toolset that offers a zoomed-out view of policies, compliance, security, and cost optimization. An unsurprising outcome of the survey is that there is clear value in cloud management, yet even knowing that value, organizations struggle with implementation. A mere 5% used consolidated cloud management tools extensively on premises or across public and/or private clouds. This despite a burgeoning marketplace of all-in-one solutions such as VMware vRealize Suite, Flexera CMP, CloudBolt, and others.
Distributed SQL is a relational database win-win. The technology’s innovations are based on lessons learned over the past thirty or so years to deliver true dynamic elasticity. The modern benefits of dynamic elasticity include the ability to add or remove nodes simply, quickly, and on-demand. The approach is self-managing, able to automatically rebalance nodes or rebalance data within those nodes while maintaining extremely high continuous availability (i.e., automatic failover). And of course, the approach includes all of the features that make relational databases so powerful, like the ability to use standard SQL (including JOINs) and to maintain ACID compliance. A distributed SQL option like MariaDB’s Xpand is architected for all nodes to work together to form a single logical and distributed database that all applications can point to, regardless of the intended use case. Whether a business needs a three-node cluster for modest workloads or hundreds, even thousands, of nodes for unlimited scalability, distributed SQL means deployments can grow or shrink on demand.
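To make the elasticity idea concrete, here is a toy sketch of how rows might be redistributed when a cluster scales out. This is a hypothetical illustration, not Xpand’s actual placement algorithm: rows are assigned to nodes by hashing their primary key, so a rebalance after adding a node simply recomputes ownership and moves the affected rows.

```python
from hashlib import sha256

# Toy illustration (not Xpand's actual algorithm): each row's owner is
# derived from a hash of its primary key, so placement is deterministic
# and can be recomputed when the node list changes.
def owner(key: str, nodes: list[str]) -> str:
    digest = int(sha256(key.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

def rebalance(rows: dict[str, str], nodes: list[str]) -> dict[str, list[str]]:
    # Map every row key to its owning node for the given cluster shape.
    placement: dict[str, list[str]] = {n: [] for n in nodes}
    for key in rows:
        placement[owner(key, nodes)].append(key)
    return placement

rows = {f"row{i}": f"value{i}" for i in range(12)}
before = rebalance(rows, ["node1", "node2", "node3"])
after = rebalance(rows, ["node1", "node2", "node3", "node4"])  # scale out
```

Note that naive modulo hashing moves many rows on every membership change; production distributed SQL engines typically use consistent hashing or range partitioning precisely to minimize that data movement during a rebalance.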
To elaborate on this point: we have had mainframe computers that evolved into personal computers, which then evolved into mobile devices. In the case of the metaverse, however, the leap is not necessarily to a faster device. Instead, it is to virtual simulations of worlds and environments where, through VR and AR, we can buy real-world goods, and in particular goods that happen to exist only in these virtual environments. Connecting computers to the internet catapulted them to mainstream use and marked the beginning of the dot-com era, and the last 15 years were clearly shaped by the mobile phone, which drove full mass adoption. This makes the metaverse a very practical concept. It's not just VR concerts, 3D gatherings, or digital assets; the metaverse is an idea that brings these concepts together and tries to explain how they are all connected. Matthew Ball, an outspoken proponent of the metaverse, has outlined a few key ideas that show what this evolved form of the internet will look like.
My other concern is that the number of tools you can use might be considered more important than your data science knowledge. This situation leads to data scientists being evaluated based on tool knowledge, not science. If this happens, it will be a serious problem. Software tools are just there for turning ideas into action or value. The ideas come from data scientists who blend analytical thinking, creativity, statistics, and theory. If data scientists are forced to learn as many tools as possible, they might miss the point. They will get very quick at performing tasks thanks to the highly advanced tools. However, this is not enough for creating value out of data. What leads to creating value is first to define a problem that can be solved with data. Once a problem is defined and a solution is designed, the tools are then needed to do the tasks. I think you would agree that without a problem and solution, there is no use for advanced software tools. To sum up, we definitely need software tools and packages to perform data science. They enable us to work with large amounts of data quickly and efficiently.
Targeting of Alibaba is on the rise thanks to a few unique features of the service, researchers noted, and the way that cloud instances can be configured. “The default Alibaba ECS instance provides root access,” according to the analysis. “With Alibaba, all users have the option to give a password straight to the root user inside the virtual machine (VM).” This is in contrast to how other cloud service providers architect their storage access, researchers pointed out. In most cases, the principle of least privilege is front and center, with different options such as not allowing Secure Shell (SSH) authentication over user and password, or allowing asymmetric cryptography authentication. That way, if cyberattackers gain credentials, entering with only low-privilege access would require them to make an “enhanced effort” to escalate the privileges, according to Trend Micro: “Other cloud service providers do not allow the user to log in via SSH directly by default, so a less privileged user is required.”
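The least-privilege posture described above usually shows up in the SSH daemon configuration itself. A minimal sketch of a hardened `sshd_config` excerpt, using standard OpenSSH directives:

```
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no            # no direct root login over SSH
PasswordAuthentication no     # disable password-based SSH authentication
PubkeyAuthentication yes      # require asymmetric (key-based) authentication
```

With settings like these, an attacker who steals a password still cannot log in directly as root; they must compromise a low-privilege key-holding account and then escalate, which is the "enhanced effort" Trend Micro describes.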
Insight might ultimately stem from data, but that doesn’t mean data – or even ‘intelligent’ data – should be conflated with insight. They aren’t the same thing. That’s not to say that adding intelligence to data isn’t important, of course. After all, raw and unprocessed data can only gain value once we’ve added intelligence – once we have, in other words, annotated and quantified that data. This principle is, of course, magnified in the context of what’s known as ‘big data’, which I generally take to mean datasets that must be measured in terms of gigabytes and exabytes. Through the use of machine learning, it’s possible to add intelligence to this kind of massive dataset through annotating it with sentiment, emotions, topics, and other useful variables. ... As we ascend our knowledge pyramid, it’s easy to see how one might confuse actionable information with true insight. Nonetheless, there is an important distinction between the two terms, especially when it comes to their respective relationships to long-term strategy.
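The annotation step the author describes can be sketched as a transformation from raw records to records enriched with variables like sentiment. The snippet below is a toy stand-in: a real pipeline would use trained ML models, and the keyword sets here are invented purely to show the shape of the transformation.

```python
# Toy stand-in for ML-driven annotation: raw text gains "intelligence"
# by being tagged with a sentiment label. The keyword heuristic below
# is a placeholder for a trained model.
POSITIVE = {"great", "love", "fast"}
NEGATIVE = {"slow", "broken", "hate"}

def annotate(text: str) -> dict:
    words = set(text.lower().split())
    if words & POSITIVE and not words & NEGATIVE:
        sentiment = "positive"
    elif words & NEGATIVE and not words & POSITIVE:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return {"text": text, "sentiment": sentiment}
```

Even with real models, the point of the paragraph stands: the annotated record is actionable information, not yet insight.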
A CIAM system typically resides in the cloud and operates under a software-as-a-service (SaaS) model. It relies on built-in connectors and APIs to tie together various enterprise applications, systems, and data repositories. This makes it possible to combine features, including customer registration, account management, directory services, and authentication. When a customer visits a website or calls in, for example, the CIAM solution handles the authentication process (using a password, single sign-on, biometrics, or multiple factors, for example). It’s also adept at juggling different protocols, including SAML, OpenID Connect, OAuth, and FIDO. Once a customer signs in, it’s possible to place an order, track delivery, update a user profile, and handle other account-related tasks. Another benefit of CIAM is that it delivers risk-based authentication (RBA), which is sometimes referred to as adaptive authentication. This means that the system can look for signs and signals, such as a user’s IP address, User-Agent HTTP headers, the date and time of access, and other factors, and adjust the strength of authentication it demands to match the apparent risk.
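The risk-based authentication idea can be sketched in a few lines. The signals, weights, and threshold below are all hypothetical, invented only to illustrate how signals like an unfamiliar IP or an odd access time might escalate a login from password-only to MFA; a real CIAM product uses far richer models.

```python
from dataclasses import dataclass

# Hypothetical sketch of risk-based (adaptive) authentication: score a
# login attempt from a few signals and decide whether to demand a
# second factor. Weights and threshold are made up for illustration.
@dataclass
class LoginSignals:
    ip_seen_before: bool          # has this user logged in from this IP?
    user_agent_seen_before: bool  # known browser/device fingerprint?
    hour_of_day: int              # 0-23, in the account's usual timezone

def risk_score(s: LoginSignals) -> int:
    score = 0
    if not s.ip_seen_before:
        score += 2   # unfamiliar network is the strongest signal here
    if not s.user_agent_seen_before:
        score += 1   # unfamiliar device
    if s.hour_of_day < 6:
        score += 1   # unusual access time
    return score

def required_auth(s: LoginSignals) -> str:
    # Low-risk logins keep the frictionless path; risky ones step up.
    return "mfa" if risk_score(s) >= 2 else "password"
```

For example, a familiar device at 2pm would stay on the password path, while a new IP and new browser at 3am would be stepped up to MFA.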
The development is concerning but not surprising for those fighting large-scale botnets. Emotet and Trickbot were essentially run by different departments of one cybercriminal organization based in Russia, says Alex Holden, CISO of Hold Security, a Wisconsin-based security consultancy that studies the cybercriminal underground. Researchers have long noticed close associations, with Emotet distributing Trickbot and vice versa. Both have been linked to distribution of ransomware including Ryuk and Conti. "We knew that it [Emotet] would come back," Holden says. "It was a matter of time. But it may signal more battles are ahead." Emotet was the "biggest and baddest" botnet before it was taken down, says James Shank, senior security evangelist and chief architect, Community Services with Team Cymru. A new version of Emotet is being distributed by Trickbot, says Marcus Hutchins, a malware researcher with Kryptos Logic who is also part of Cryptolaemus, a notable group of top security researchers and systems administrators dedicated to fighting Emotet. Emotet's return will likely mean greater distribution of ransomware.
Human-in-the-loop systems are essentially about providing this context to AI models. This context can take various forms: removing bias from models so they adhere to ethical standards, providing situational-awareness information to improve predictions, or dispensing a final oversight before a decision is made. Context also flows the other way, with the AI system providing context to the human for further action. When that critical piece of information is missing, the result is what is popularly known as "the black box problem," where users don't really understand how the model has churned through data and arrived at a decision. Given that algorithms now drive parts of our lives, from driving cars and giving product recommendations to making investment decisions and even predicting employee attrition, it is becoming integral for stakeholders to understand and trust these AI operations. The key to designing a successful human-in-the-loop system is solving this challenge of two-way communication of context.
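One common shape for such a system is a confidence gate: predictions the model is unsure about are routed to a human reviewer, and the model surfaces its own context (here, just a confidence value) so the handoff is not a black box. The function and threshold below are a hypothetical sketch, not any particular product's API.

```python
# Hypothetical human-in-the-loop gate: low-confidence predictions are
# flagged for human review, and the model's confidence is surfaced as
# context for the reviewer rather than hidden inside a black box.
def route(prediction: str, confidence: float, threshold: float = 0.9) -> dict:
    return {
        "prediction": prediction,
        "confidence": confidence,                    # context for the human
        "needs_human_review": confidence < threshold # two-way handoff
    }
```

A high-confidence call like `route("approve", 0.97)` flows straight through, while `route("deny", 0.62)` lands in a reviewer's queue along with the confidence that explains why.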
Quote for the day:
"A lot of people have gone farther than they thought they could because someone else thought they could." -- Zig Ziglar