Awareness and funding do not translate into preparedness: although 75% of those surveyed feel their board understands their organization's systemic risk, 76% think they have invested adequately in cybersecurity, 75% believe their data is adequately protected, and 76% discuss cybersecurity at least monthly, these efforts appear insufficient; 47% still view their organization as unprepared to cope with a cyber attack in the next 12 months. Board members disagree with CISOs about the most important consequences of a cyber incident: internal data becoming public tops the boards' list of concerns (37%), followed closely by reputational damage (34%) and revenue loss (33%). These concerns are in sharp contrast with those of CISOs, who are more worried about significant downtime, disruption of operations, and impact on business valuations. High employee awareness doesn't protect against human error: although 76% of those surveyed believe their employees understand their role in protecting the organization against threats, 67% of board members believe human error is their biggest cyber vulnerability.
Reducing the cognitive pressure on development teams enables them to focus more readily on the core business code. Majors feels that "the more swiftly and easily developers can move, the better your platform team". In a recent Twitter thread, Majors elaborated on the relationship platform teams have with infrastructure and business code: platform teams uniquely sit between these two tectonic plates -- infra code and business code, each moving at different speeds -- allowing other engineers to abstract infrastructure away completely. Majors draws a clear line between DevOps and platform engineering, stating that "DevOps is about automation and managing infrastructure. Platform is about not having infra to run." This definition aligns with another of Majors' statements: that platform teams should focus on paying other people to run infrastructure and conserve their development cycles for the development platform. Majors states that the goal of the platform team is to "run less software".
Thermal attacks can occur after users type their passcode on a computer keyboard, smartphone screen or ATM keypad and then leave the device unguarded. A passer-by equipped with a thermal camera can take a picture that reveals where their fingers have touched the device. The brighter an area appears in the thermal image, the more recently it was touched, so the order in which keys were pressed can be estimated. Previous research by Dr Mohamed Khamis, who led the development of the system, found that ThermoSecure could reveal 86 per cent of passwords when thermal images are taken within 20 seconds, dropping to 62 per cent after 60 seconds. The researchers also found that, within 20 seconds, ThermoSecure was capable of successfully guessing 67 per cent of long 16-character passwords. As passwords grew shorter, success rates increased: 93 per cent of eight-symbol passwords were cracked, and all six-symbol passwords were successfully guessed. Another factor that made it easier for ThermoSecure to guess passwords was the typing style of the keyboard users.
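The ordering step described above can be sketched in a few lines. This is an illustrative toy, not ThermoSecure's actual code: the function name and the intensity values are assumptions. Since warmer residue indicates a more recent touch, sorting key regions from coolest to warmest approximates the order in which they were pressed.

```python
# Hypothetical sketch of order recovery from per-key thermal readings.
# Cooler residue = touched longer ago, so ascending heat approximates
# first-pressed to last-pressed order.

def estimate_press_order(key_heat):
    """key_heat: dict mapping key label -> measured thermal intensity.
    Returns labels ordered from first pressed (coolest residue) to
    last pressed (warmest residue)."""
    return [k for k, _ in sorted(key_heat.items(), key=lambda kv: kv[1])]

# Example: '7' has cooled the most (pressed first), '3' is warmest.
heat = {"7": 30.1, "1": 30.9, "4": 31.6, "3": 32.4}
print(estimate_press_order(heat))  # ['7', '1', '4', '3']
```

In practice the attack is harder than this sketch suggests: repeated key presses, overlapping touches and ambient temperature all blur the signal, which is why accuracy drops as the image is taken later.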
“The Digital Services Act is one of the EU’s most ground-breaking horizontal regulations and I am convinced it has the potential to become the ‘gold standard’ for other regulators in the world,” said Jozef Síkela, minister for industry and trade. “By setting new standards for a safer and more accountable online environment, the DSA marks the beginning of a new relationship between online platforms and users and regulators in the European Union and beyond.” Under the DSA, providers of intermediary services – including social media, online marketplaces, very large online platforms (VLOPs) and very large online search engines (VLOSEs) – will be forced into greater transparency, and will also be held accountable for their role in disseminating illegal and harmful content online. For example, the DSA will prohibit platforms from using targeted advertising based on the use of minors’ personal data; impose limits on the use of sensitive personal data for targeted advertising, including gender, race and religion; and introduce obligations on firms to react quickly to illegal content.
Even though software engineers like to have a sense of ownership, we shouldn't discourage flexibility: people easily become bored working on the same thing for years. There's also the sunk-cost fallacy to keep in mind: our tendency to keep investing in something simply because of the time and effort we've already put into it. Providing the flexibility to pivot when it makes sense can therefore increase overall satisfaction and output. Accordingly, flexible management is also crucial to embracing pivots when they are necessary. For example, if a project is well underway but an engineer identifies a more elegant solution, team leads should be open to recognizing and acting on the change. But to realize this sort of relationship, trust and openness must be bidirectional, said Sutter. If engineers can't express their ideas or are afraid to tell their boss they're wrong, these important conversations can't happen. A flexible structure is also necessary to attract talent that prefers a more modern work-life balance.
To handle the many mechanisms and services newer applications used or offered, they were broken down into their own micro-level apps: microservices. Pulling all the components out of a monolith so each one could run more efficiently on its own naturally required a complex architecture to make them work together. Cloud-native DevOps shortened the development cycle rather organically. In the old monolith environments, replicating conditions in testing was fairly simple, but in the cloud there are too many moving parts. Each cog and gear (an instance, a container, the second deployment of some app) has its own configuration. Add in the exact conditions affecting an individual user experience or the availability of some cloud resource, and you have a largely irreproducible set of conditions. Hence, developers need to anticipate more and more issues before full deployment, especially if they're spinning out the process to another "as a service" provider (serverless in particular). If they don't, late-stage troubleshooting becomes overwhelming.
The most effective approach to multi-cloud budgeting is to partner across your organization to understand workload plans, specifically regarding the cloud provider of choice, says A.J. Wasserman, product owner, Cloud FinOps, with Liberty Mutual Insurance. "This will provide a solid baseline for forecasting, which can then be used to drive budgeting," she explains. "As you go through this process, it's important to attempt to segment the budget by cloud provider to understand how your actuals are tracking compared to the original budget." The best approach to multi-cloud budgeting is to focus on a multi-year plan versus an annual budget to allow for both tactical and strategic considerations, Hoecker advises. Looking beyond budgeting and into financial operations, it's important to define a common tagging approach that can be applied consistently across clouds. This will enable common views, as well as the ability to compare cloud consumption and costs between cloud service providers, Potter says. "Cloud FinOps solutions can help provide real-time insight into cloud spend versus budgets, and alert relevant stakeholders early if costs are exceeding expectations," he notes.
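The common-tagging idea can be sketched concretely. This is a minimal illustration, not a real FinOps tool: the alias table and tag keys are invented for the example. The point is that each provider's cost-allocation tags get mapped onto one shared schema so spend can be compared across clouds.

```python
# Illustrative sketch: map provider-specific cost-allocation tag keys
# onto a single common schema. The keys below are hypothetical.
TAG_ALIASES = {
    "aws:team": "team", "azure:owner-team": "team",
    "aws:env": "environment", "azure:environment": "environment",
}

def normalize_tags(raw_tags):
    """Translate provider-specific tag keys to common keys, dropping
    any tag that has no alias in the shared schema."""
    return {TAG_ALIASES[k]: v for k, v in raw_tags.items() if k in TAG_ALIASES}

aws_item = {"aws:team": "payments", "aws:env": "prod", "aws:misc": "x"}
print(normalize_tags(aws_item))  # {'team': 'payments', 'environment': 'prod'}
```

With every cost line item normalized this way, "common views" such as spend per team per environment can be aggregated regardless of which provider the cost came from.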
Attackers are improving too, thanks to the effort they put into collecting intelligence on victims for social engineering. For one, they're utilizing the vast amounts of information that can be harvested online, says Jon Clay, vice president of threat intelligence for cybersecurity firm Trend Micro. "The actors investigate their victims using open source intelligence to obtain lots of information about their victim [and] craft very realistic phishing emails to get them to click a URL, open an attachment, or simply do what the email tells them to do, like in the case of business e-mail compromise (BEC) attacks," he says. The data suggests that attackers are also getting better at analyzing defensive technologies and determining their limitations. To get around systems that detect malicious URLs, for example, cybercriminals are increasingly using dynamic websites that may appear legitimate when an email is sent at 2 a.m. but present a different site at 8 a.m., when the worker opens the message.
Process mapping helps businesses become more efficient by providing insight into their processes. It helps to identify bottlenecks, repetitions, and delays in a process flow, as well as to establish boundaries, responsibilities, effectiveness metrics, and a schedule baseline. When mapping a process, you identify each step, draw each step using the appropriate shape or symbol, and show the flow by drawing arrows to connect the steps. This can be done by hand or using process mapping software. ... There are two ways process mapping can help software testers in coding and debugging: process mapping for debugging, and process mapping for control flow and statistical analysis. Every software developer can tell you about the drudgery of debugging a piece of software. Developers can spend hours combing through code trying to find the piece that is generating an error or incorrect output.
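The steps-and-arrows structure described above is just a directed graph, which makes some of the analysis mechanical. The sketch below is an assumption-laden illustration (the function, step names and bottleneck heuristic are invented, not part of any standard tool): steps are nodes, arrows are edges, and a step that several arrows converge on is flagged as a candidate bottleneck.

```python
# Minimal sketch: a process map as a directed graph. Steps with more
# than one incoming arrow are flagged as possible bottlenecks, since
# parallel flows converge and queue there. Heuristic only.
from collections import Counter

def find_bottlenecks(edges):
    """edges: list of (from_step, to_step) arrows in the process map.
    Returns steps receiving more than one incoming arrow, sorted."""
    incoming = Counter(dst for _, dst in edges)
    return sorted(step for step, n in incoming.items() if n > 1)

flow = [("intake", "triage"), ("triage", "review"),
        ("rework", "review"), ("review", "approve")]
print(find_bottlenecks(flow))  # ['review']
```

The same graph representation supports the control-flow use mentioned above: a debugging process map drawn this way can be walked edge by edge to see which path produced the bad output.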
Quote for the day:
"A throne is only a bench covered with velvet." -- Napoleon Bonaparte