A solid program helps to demystify cybersecurity for non-security employees and bring it into the scope of their own specialties. “People often think cybersecurity is an unfathomable, deeply technical concept, but having security champions creates an open dialogue, and couching security issues and threats in ways that are relevant to specific roles is the best way to navigate this issue,” Dr. John Blythe, chartered psychologist and director of cyber workforce psychology at cybersecurity training firm Immersive Labs, tells CSO. “Engineers need to understand how to write secure code, executive teams need to know how to respond better in a cyber crisis, and IT teams need to know how to secure cloud infrastructure.” Once trained, cybersecurity champions can identify and address cybersecurity vulnerabilities before they become widespread and problematic, adds Dominic Grunden, CISO of UnionDigital Bank. “In doing so, they can save organizations significant time and money in the process,” he says. Grunden has integrated and run cybersecurity champions programs across multiple industries and organizations.
Charming Kitten is now using what researchers have dubbed PowerLess Backdoor, a previously undocumented PowerShell trojan that supports downloading additional payloads, such as a keylogger and an info stealer. The team also discovered a unique new PowerShell execution process related to the backdoor, aimed at slipping past security-detection products, Frank wrote. “The PowerShell code runs in the context of a .NET application, thus not launching ‘powershell.exe’ which enables it to evade security products,” he wrote. Overall, the new tools show Charming Kitten developing more “modular, multi-staged malware,” with payload delivery aimed at “both stealth and efficacy,” Frank noted. The group also is leaning heavily on open-source tools such as cryptography libraries, weaponizing them for payloads and communication encryption, he said. This reliance on open-source tools suggests that the APT’s developers likely lack “specialization in any specific coding language” and possess “intermediate coding skills,” Frank observed.
No one wants someone else to find the fault in their code, but at the same time, developers aren’t crazy about running tests, whether that means writing tests or doing manual testing. Developers shouldn’t have to ship their code off to a testing environment to discover whether it works, or worry about the embarrassment of having a colleague find an error. Robust change validation solves this: it allows developers to spot potential errors before the application or update leaves the local environment. Validating changes involves testing not only that the code is correct, but also that it works as expected with all of its dependencies. When a change fails validation, the developer knows immediately and can fix the problem and try again. And when a change passes validation, the developer can push the code into the production-like environment with confidence, knowing that it has been tested against upstream and downstream dependencies and won’t cause anything to break unexpectedly. The idea is that as much hardening as possible happens in the development environment so that developers can ship with maximum confidence.
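As a minimal sketch of what local change validation can look like, the snippet below (hypothetical function names, not from any specific tool) pairs a unit check on the changed code with a contract check against a downstream dependency, so both run before anything leaves the local environment:

```python
# Hypothetical example: validate a change locally before pushing.
# parse_price is the function being changed; format_receipt stands in
# for a downstream consumer that depends on its output contract
# (integer cents, never floats).

def parse_price(text: str) -> int:
    """Parse a price like '$12.34' into integer cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def format_receipt(cents: int) -> str:
    """Downstream dependency: expects integer cents."""
    return f"${cents // 100}.{cents % 100:02d}"

def validate_change() -> None:
    # Unit check: the changed code behaves as intended.
    assert parse_price("$12.34") == 1234
    # Contract check: the output still satisfies downstream expectations.
    result = parse_price("$5.00")
    assert isinstance(result, int)
    assert format_receipt(result) == "$5.00"

if __name__ == "__main__":
    validate_change()
    print("all checks passed")
```

Running a script like this (or the equivalent test suite) on every local change is what lets a failure surface immediately, while a pass gives the developer evidence that upstream inputs and downstream consumers still agree.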
Everything was fair game and everything was abused, as we had a very limited set of tools with the demand to do a lot. But things had changed dramatically by the time I joined Yahoo!. Devs from the U.K. were strong supporters of Web Standards, and I credit them for greatly influencing how HTML and CSS were written at Yahoo!. Semantic markup was a reality, and CSS was written following the Separation of Concerns (SoC) principle to a T. YUI had CSS components but did not have a CSS framework yet. ... Every Yahoo! design team had its own view of what was the perfect font size, the perfect margin, etc., and we were constantly receiving requests to add very specific styles to the library. That situation was unmaintainable, so we decided to come up with a tool that would let developers create their own styles on the fly, while respecting the Atomic nature of the authoring method. And that’s how Atomizer was born. We stopped worrying about adding styles (CSS declarations) and instead focused on creating a rich vocabulary to give developers a wide array of styling options, like media queries, descendant selectors, and pseudo-classes, among other things.
Cyber Signals aggregates insights we see from our research and security teams on the frontlines. This includes analysis from our 24 trillion security signals combined with intelligence we track by monitoring more than 40 nation-state groups and over 140 threat groups. In our first edition, we unpack the topic of identity. Our identities are made up of everything we say and do in our lives, recorded as data that spans a sea of apps and services. While this delivers great utility, if we don’t maintain good security hygiene, our identities are at risk. And over the last year, we have seen identity become the battleground for security. While threats have been rising fast over the past two years, there has been low adoption of strong identity authentication, such as multifactor authentication (MFA) and passwordless solutions. For example, our research shows that across industries, only 22 percent of customers using Microsoft Azure Active Directory (Azure AD), Microsoft’s cloud identity solution, had implemented strong identity authentication protection as of December 2021.
In the second part of the study, the team calculated the number of physical qubits needed to break the encryption used for Bitcoin transactions. Marek Narozniak, a physicist at New York University (NYU) in the US who was not part of the study, points out that this question – whether cryptocurrencies are safe against quantum computer attacks – comes with additional constraints not present in the FeMo-co simulation. While a 10-day computation time may be acceptable for FeMo-co simulations, Narozniak notes that the Bitcoin network is set up so that a hacker armed with an error-correcting quantum computer would have a very limited time to decrypt information and steal funds. According to Webber and collaborators, breaking Bitcoin encryption within one hour – a time window within which transactions may be vulnerable – would take about three hundred million qubits. Based on this result, Narozniak concludes that “Bitcoin is pretty safe”, although he warns that not all cryptocurrencies operate the same way. “There are other cryptocurrencies that work differently, and they have different algorithms that could be more vulnerable,” he says.
Intrinsic protocol risk in DeFi comes in all shapes. In DeFi lending protocols such as Compound or Aave, liquidation is a mechanism that keeps lending markets collateralized at appropriate levels. Liquidations allow participants to take part of the principal in undercollateralized positions. Slippage is another condition present in automated market maker (AMM) protocols such as Curve. High slippage conditions in Curve pools can force investors to pay extremely high fees to remove liquidity supplied to a protocol. ... In a feature unique to DeFi, decentralized governance proposals control the behavior of a DeFi protocol and, quite often, cause changes in its liquidity composition that affect investors. For instance, governance proposals that alter weights in AMM pools or collateralization ratios in lending protocols typically cause liquidity to flow into or out of the protocol. A more concerning aspect of DeFi governance from a risk perspective is the increasing centralization of the governance structure of many DeFi protocols. Even though DeFi governance models are architecturally decentralized, many of them are controlled by a small number of parties that can influence the outcome of any proposal.
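To make the liquidation mechanism concrete, here is a simplified sketch of a collateralization-ratio check. The threshold, function names, and dollar figures are illustrative assumptions, not the logic of Compound, Aave, or any specific protocol:

```python
# Simplified sketch of a DeFi lending liquidation check.
# The 150% threshold and all values below are illustrative only.

LIQUIDATION_THRESHOLD = 1.5  # e.g. 150% minimum collateralization

def collateralization_ratio(collateral_value: float, debt_value: float) -> float:
    """Ratio of collateral to outstanding debt, both priced in a common currency."""
    if debt_value == 0:
        return float("inf")
    return collateral_value / debt_value

def is_liquidatable(collateral_value: float, debt_value: float) -> bool:
    """A position becomes eligible for liquidation once it falls below the threshold."""
    return collateralization_ratio(collateral_value, debt_value) < LIQUIDATION_THRESHOLD

# A position with $1,800 of collateral against $1,000 of debt sits at 180%
# and is safe; if the collateral's market value drops to $1,400 (140%),
# liquidators may seize part of the collateral to repay the debt.
assert not is_liquidatable(1800, 1000)  # ratio 1.8 >= 1.5
assert is_liquidatable(1400, 1000)      # ratio 1.4 < 1.5
```

The same pattern explains why liquidations are a market-maintenance mechanism rather than a bug: as collateral prices move, positions that cross the threshold are closed out before they become a loss for lenders.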
Because npm packages in general are downloaded upwards of 20 billion times a week, and thus installed across countless web-facing components of software and applications around the world, exploiting them gives attackers a sizeable playing field, researchers said in their Wednesday report. ... That level of activity enables threat actors to launch a number of software supply-chain attacks, researchers said. Accordingly, WhiteSource investigated malicious activity in npm, identifying more than 1,300 malicious packages in 2021. These were subsequently removed, but may have been brought into any number of applications before they were taken down. “Attackers are focusing more efforts on using npm for their own nefarious purposes and targeting the software supply chain using npm,” they wrote in the report. “In these supply-chain attacks, adversaries are shifting their attacks upstream by infecting existing components that are distributed downstream and installed potentially millions of times.”
While the metaverse may be a golden ticket of new opportunities for retailers, there are very real security risks that also need to be considered. For example, in this new, yet-to-be-explored metaverse, there’s the danger of consumers coming across a virtual man in a trench coat selling fake designer watches. How can consumers tell whether he is selling the genuine article or fakes? Ensuring your brand and products are protected in the metaverse is going to be a serious challenge. Monitoring for intellectual property (IP) infringement across the various components of the metaverse will not be easy. If consumers have bad experiences in the metaverse, such as buying fake goods, this can cause great harm to a retailer’s reputation, both online and in the real world. This is why it is important that retailers take steps to protect their brand and IP as well as their customers. It has been suggested that operators within the metaverse should establish strategies for protecting users’ IP, in the same vein as YouTube, eBay and Amazon work to protect rights holders from illicit activity on their platforms.
The best way to leverage SD-WAN’s potential for reducing network transport costs is to have a portfolio strategy with respect to selecting internet transport suppliers. Through robust acquisition processes and detailed coverage and cost modeling, enterprises need to come up with a manageable number of vetted suppliers that meet their global transport requirements. Potential suppliers should be evaluated on criteria such as provisioning, SLAs, operational and service support, and commercial and contractual terms. Typically, for larger businesses, a portfolio approach may include one global or strong regional aggregator alongside two to five other telecom service providers with different provisioning models. Having a small handful of suppliers as go-to partners rather than a single supplier can yield significant cost savings, which can run 30% higher than with a single-source supplier approach. If managed correctly, a portfolio approach also offers a way to keep performance and pricing pressure on competing suppliers.
Quote for the day:
"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner