Hansson argues that the cloud at one point made sense for his business, but no longer does. “Yet by continuing to operate in the cloud, we're paying an at times almost absurd premium for the possibility that it could (be needed). It's like paying a quarter of your house's value for earthquake insurance when you don't live anywhere near a fault line,” Hansson wrote. “We're paying over half a million dollars per year for database (RDS) and search (ES) services from Amazon. Yes, when you're processing email for many tens of thousands of customers, there's a lot of data to analyze and store, but this still strikes me as rather absurd. Do you know how many insanely beefy servers you could purchase on a budget of half a million dollars per year?” He then addressed the “but you need to pay people to manage those servers” issue. “Anyone who thinks running a major service like HEY or Basecamp in the cloud is simple has clearly never tried,” he said. “Some things are simpler, others more complex, but on the whole, I've yet to hear of organizations at our scale being able to materially shrink their operations team just because they moved to the cloud.”
Adopt and Demonstrate a Proactive Mindset - At first, this may seem like an obvious reiteration of an accepted business practice. However, organizations take it lightly far too often. A CEO’s direct involvement with cybersecurity practices must herald noticeable changes, and this should be most evident in an organization's mindset toward implementing proposed transformations. All policies enacted should reflect an active privacy and security governance model that takes a proactive approach to resolving and mitigating security challenges rather than relying on reactive response. ... Conduct Rigorous Assessments - A critical practice that many organizations shy away from is a consistent assessment regime that thoroughly evaluates systems and mechanisms to ensure cybersecurity standards are up to par. Yes, it’s a monotonous job, which may be why so many organizations overlook the simple fact that it is not enough just to have sufficient measures and mechanisms in place. It is equally important to ensure that these measures are cross-checked and regularly run through assessments validating their effectiveness.
In the bad old days of on-premises data centers, if you bought a server, you owned it. No matter how generous the discount you negotiated with your hardware vendor, once they sold it to you, it really didn’t matter how little you made the CPU spin—they weren’t going to give you any money back. Fast forward to the days of cloud computing, by contrast, and it’s a fundamental principle that you pay for what you use. Use less, pay less. Does this mean enterprises may elect to use fewer cloud computing resources in a downturn? Of course it does. Is that a good thing? Absolutely. Why? Because it’s a customer-centric view rather than a vendor-centric view. Each of the cloud providers understands this, which is why their executives were united in praising, not lamenting, the ability of customers to spend less when times are hard. Alphabet/Google CEO Sundar Pichai introduced this theme, arguing that “the long-term trends that are driving cloud adoption continue to play an even stronger role during uncertain macroeconomic times.” Namely, cloud yields flexibility for enterprises to scale up or down based on their needs.
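The "use less, pay less" principle can be made concrete with a little arithmetic. A minimal sketch, with entirely hypothetical prices and demand figures: owned hardware bills the same whether the CPUs spin or sit idle, while usage-based billing shrinks when demand does.

```python
# Illustrative cost model (all prices and demand figures are hypothetical):
# fixed-capacity owned servers cost the same regardless of demand, while
# usage-based cloud billing scales down with consumption.

def owned_cost(monthly_server_cost: float, servers: int, months: int = 12) -> float:
    """Owned hardware: the bill is fixed no matter how idle the CPUs are."""
    return monthly_server_cost * servers * months

def cloud_cost(unit_price: float, monthly_usage: list[float]) -> float:
    """Usage-based billing: pay only for units actually consumed."""
    return sum(unit_price * usage for usage in monthly_usage)

# A downturn scenario: demand halves mid-year.
demand = [1000] * 6 + [500] * 6           # compute-units per month
print(owned_cost(2000, 10))               # 240000.0 -- fixed, downturn or not
print(cloud_cost(2.0, demand))            # 18000.0 -- shrinks with demand
```

The point is not the specific numbers but the shape of the curve: only the second function responds to the customer scaling down.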
In addition to compromising MFA platforms and tricking employees into approving illegitimate access requests, attackers are also using adversary-in-the-middle attacks to bypass MFA, according to a report released by Microsoft’s Threat Intelligence Center this summer. More than 10,000 organizations have been targeted by these attacks over the past year, which work by waiting for a user to successfully log into a system, then hijacking the ongoing session. “The most successful MFA cyber-attacks are based in social engineering, with all types of phishing being the most commonly used,” said Walt Greene, founder and CEO at consulting firm QDEx Labs. “These attacks, when carried out properly, have a fairly high probability of success to the unsuspecting user.” It’s clear that MFA alone is no longer enough and data center cybersecurity managers need to start planning ahead for a post-password security paradigm. Until then, additional security measures should be put in place to strengthen access controls and limit lateral movement through data center environments.
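One of the additional measures often suggested against session hijacking is binding the session token to properties of the client that originally authenticated. The sketch below illustrates the idea with a hash of IP address and user agent; all names and the fingerprint scheme are illustrative assumptions, not any vendor's API, and a determined adversary-in-the-middle proxy can still forward client attributes, so this raises the bar rather than closing the hole.

```python
# Hedged sketch: bind a session token to a fingerprint of the client that
# authenticated, so a token replayed from a different machine fails.
import hashlib
import secrets

SESSIONS: dict[str, str] = {}  # token -> client fingerprint

def fingerprint(ip: str, user_agent: str) -> str:
    """Derive a stable fingerprint from client attributes (illustrative)."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def create_session(ip: str, user_agent: str) -> str:
    """Issue a random token and remember who it was issued to."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = fingerprint(ip, user_agent)
    return token

def validate(token: str, ip: str, user_agent: str) -> bool:
    """Reject unknown tokens and tokens presented from a different client."""
    expected = SESSIONS.get(token)
    return expected is not None and expected == fingerprint(ip, user_agent)

tok = create_session("203.0.113.5", "Mozilla/5.0")
print(validate(tok, "203.0.113.5", "Mozilla/5.0"))   # True: original client
print(validate(tok, "198.51.100.9", "Mozilla/5.0"))  # False: replayed elsewhere
```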
Web3 is all about leveraging assets — tokens or NFTs — to create systems of incentives to deliver products and services in ways that are more automated, trusted, and permission-minimized. You can’t have DeFi, identity solutions, or Decentralized Autonomous Organizations (DAOs) without assets that grant some form of rights or responsibilities when participating in a network. But building an asset in today’s Web3 is the same as setting up your own web infrastructure in the early 2000s; everyone is doing everything themselves. To catalyze Web3 adoption, developers must be able to leverage (and improve upon) the work others have done so far. Because developers can’t easily reuse others’ code on-ledger, they are forced to copy-paste it. The result is redundant code clogging networks, leading to increased transaction costs and billions of dollars of security breaches. Then comes the aspect of composability, the feature that allows for interconnected decentralized applications and protocols.
Microlearning is a practical way to pick up new data science skills in less than 10 minutes per day. Developing this habit is a great way to keep you interested in advancing your skills as a data scientist by picking up new technologies or ways of doing things. Medium, Reddit, Substack, and various podcasts (see below) are great sources of information about new advances in data science that may inspire you to try learning something new. The key for adult learners is to keep the learning short and pointed toward a specific, tangible goal. This means keeping the learning to short 10-minute blocks with objectives that are easily achievable within that time. Not only do these short sessions keep you motivated to keep moving forward because they take little time to complete, they also ensure that you’re advancing your skills after every study session. Furthermore, it doesn’t seem like a hardship to complete a habit that takes less time than a coffee break. In my experience, taking 10 minutes a day to work on a skill doesn’t provide huge gains immediately, but compounds slowly over time to produce something you can be proud of at the end of a year.
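The compounding claim is easy to sanity-check with back-of-the-envelope arithmetic: ten minutes a day is a trivial daily cost but a substantial annual one.

```python
# Back-of-the-envelope: what ten minutes a day adds up to over a year.
minutes_per_day = 10
days = 365
total_hours = minutes_per_day * days / 60
print(round(total_hours, 1))  # ~60.8 hours of deliberate practice per year
```

Roughly sixty hours a year is comparable to a full university course's contact time, which is the kind of cumulative payoff the author is pointing at.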
You need to measure the true outcomes of your security training and not just look at employee participation as a statistic. Consider the employee behaviors you’d like to see change as a result of the training, and then determine if they actually do change. Such behaviors include correctly classifying sensitive emails to be encrypted, following security warnings, not falling for phishing emails and avoiding general human errors. These can all be measured to determine if your training is truly having a positive effect. Rather than offer the same generic training to all employees, tailor your training to individuals based on history, needs, job role and other factors. You might start out by using security questionnaires to gauge the level of risk among different employees. Then, consider an employee’s job role and level of seniority to determine how likely they are to be targeted by cyberattacks. Next, assess the risk of an employee accidentally or intentionally causing a security incident over privileged data or sensitive systems.
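The tailoring steps above can be sketched as a simple scoring function: combine a questionnaire result, job role, and privileged access into a risk score that selects a training track. The weights, role categories, and thresholds here are illustrative assumptions, not an established scoring standard.

```python
# Hedged sketch of risk-based training assignment (all weights illustrative).
HIGH_RISK_ROLES = {"finance", "executive", "it_admin"}  # frequently targeted roles

def risk_score(questionnaire: int, role: str, privileged_access: bool) -> int:
    """questionnaire: 0-10 self-assessed risk from the survey; higher = riskier."""
    score = questionnaire
    if role in HIGH_RISK_ROLES:
        score += 5          # more likely to be targeted by attackers
    if privileged_access:
        score += 5          # an incident would touch sensitive systems
    return score

def training_track(score: int) -> str:
    """Map a risk score to one of three training intensities."""
    if score >= 12:
        return "intensive"
    if score >= 6:
        return "standard"
    return "baseline"

print(training_track(risk_score(8, "finance", True)))     # 18 -> intensive
print(training_track(risk_score(2, "marketing", False)))  # 2 -> baseline
```

The same structure extends naturally to the measured behaviors mentioned earlier (phishing-simulation failures, misclassified emails) as additional score inputs.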
Once you’ve identified a problem worthy of solving, the next step is to capture the data you need to solve it. If you’ve defined your problem well, you’ll know what that data is, which is key. Just as defining your problem narrows the variety of data you might capture, figuring out what data you need, where to get it, and how to manage it will narrow the vast catalog of people, processes, and technologies that could compose your data environment. Consider how this played out for Alina and ChampionX. Once the team knew the problem—site visits were costly—they quickly identified the logical solution: Reduce the number of required site visits. Most visits were routine, rather than in response to an active problem, so if ChampionX could glean what was happening at the site remotely, they could save considerable time, fuel, and money. That insight told them what data they would need, which in turn allowed ChampionX’s IT and Commercial Digital teams to discern who and what they needed to capture it. They needed IoT sensors, for example, to extract relevant data from the sites.
Much of the speed on the internet relies on smart caching policies. There's a drawback for federated architectures, though, which can run into legal and practical hassles with caching. A friend spent months redoing the checkout system for an online store where he worked. Credit card processors had rules against caching, which caused some of his biggest performance problems. Federated sites may be willing to share information one time, but they may also have strict rules about how much data you can retain from the interaction. Perhaps they’re worried about security, or they could be worried you’ll cache enough data that you won’t need them anymore. In any case, caching is often a hassle with federated sites. ... One way that sites try to simplify federated relationships is to store authorizations and keep them working for months or years. On one hand, users like saving the time it takes to reauthorize. On the other hand, they often forget that they’ve authorized some distant server, which can become a security hole.
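A cache serving a federated source therefore has to encode the partner's retention rules, not just a performance-oriented eviction policy. A minimal sketch, with made-up retention windows, of a cache that discards entries once a per-source retention limit elapses:

```python
# Illustrative sketch: a cache honoring a per-entry retention limit, as a
# federated partner might require ("hold our data for at most N seconds").
import time

class RetentionCache:
    def __init__(self) -> None:
        self._store: dict[str, tuple[object, float]] = {}  # key -> (value, expiry)

    def put(self, key: str, value: object, max_retention_s: float) -> None:
        """Store a value, recording when the retention window ends."""
        self._store[key] = (value, time.monotonic() + max_retention_s)

    def get(self, key: str):
        """Return the cached value, or None once retention has elapsed."""
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() >= expiry:   # retention window over: purge, re-fetch
            del self._store[key]
            return None
        return value

cache = RetentionCache()
cache.put("partner:profile:42", {"name": "Ada"}, max_retention_s=0.05)
print(cache.get("partner:profile:42"))  # within the window: cached value
time.sleep(0.06)
print(cache.get("partner:profile:42"))  # window elapsed: None, must re-fetch
```

Retention limits imposed by the partner (or by rules like the card processors' no-caching requirement, where the window is effectively zero) thus cap how much of the usual caching speedup is actually available.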
“When we talk about IT bloat, we’re talking about IT service management spend on software or tools that you’re not getting the full value of,” explains Jenna Cline, head of IT strategy and planning at Atlassian. “We’re not talking about people but rather tools that were purchased that your team isn’t using to its full potential and doesn’t need.” She says the second important thing to consider is that bloat is relative. “We’re not seeing IT spend decreasing -- but in comparison to the year-over-year increasing budgets that we’ve seen for the past decade, stagnating IT budgets may feel like a decrease,” she says. To measure whether a tool is providing maximum value to your team, Cline says she likes to look at four key categories: usage, time to value, total cost of ownership, and growth. For usage, it's important to consider whether the applications, licenses, and services that the organization has invested in are being used, and if they are a “right-sized fit” for the firm. “This means you must know if you have a plan to use all these features or access at the level of investment we’ve engaged at,” Cline says.
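Cline's "usage" category is the easiest of the four to turn into a number. A minimal sketch, with an assumed 80% utilization threshold and made-up seat counts, of flagging a tool as right-sized or not:

```python
# Sketch of a seat-utilization check for the "usage" category.
# The 0.8 threshold and the seat counts are illustrative assumptions.
def utilization(seats_purchased: int, seats_active: int) -> float:
    """Fraction of purchased seats actually in use."""
    return seats_active / seats_purchased if seats_purchased else 0.0

def right_sized(seats_purchased: int, seats_active: int,
                target: float = 0.8) -> bool:
    """A tool counts as 'right-sized' here if utilization meets the target."""
    return utilization(seats_purchased, seats_active) >= target

print(round(utilization(500, 210), 2))  # 0.42 -> likely bloat
print(right_sized(500, 210))            # False: consider a smaller tier
```

Time to value, total cost of ownership, and growth are harder to reduce to a single ratio, but the same pattern applies: define the metric per tool, then compare against an agreed threshold rather than gut feel.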
Quote for the day:
"Leadership is, among other things, the ability to inflict pain and get away with it - short-term pain for long-term gain." -- George Will