Enterprises have responded to growing storage demands by moving to larger, scale-out NAS systems. The on-premises market here is well served, with suppliers Dell EMC, NetApp, Hitachi, HPE and IBM all offering large-capacity NAS technology with different combinations of cost and performance. Generally, applications that require low latency – media streaming or, more recently, training AI systems – are well served by flash-based NAS hardware from the traditional suppliers. But for very large datasets, and where data needs to move easily between on-premises and cloud systems, suppliers are now offering local versions of object storage. The large cloud hyperscalers even offer on-premises, object-based technology so that firms can take advantage of object storage’s global namespace and data protection features, with the security and performance benefits of local storage. However, as SNIA warns, these systems typically lack interoperability between suppliers. The main benefits of on-premises storage for unstructured data are performance, security, compliance and control – firms know their storage architecture, and can manage it in a granular way.
Eventually, CXL is expected to be an all-encompassing cache-coherent interface for connecting any number of CPUs, memory, processing accelerators (notably FPGAs and GPUs), and other peripherals. The CXL 3.0 spec, announced last week at the Flash Memory Summit (FMS), takes that disaggregation even further by allowing other parts of the architecture – processors, storage, networking, and other accelerators – to be pooled and addressed dynamically by multiple hosts and accelerators, just like the memory in CXL 2.0. The 3.0 spec also provides for direct peer-to-peer communications over a switch, or even across a switch fabric, so two GPUs could theoretically talk to one another without using the network or involving the host CPU and memory. Kurt Lender, co-chair of the CXL marketing work group and a senior ecosystem manager at Intel, said: “It’s going to be basically everywhere. It’s not just IT guys who are embracing it. Everyone’s embracing it. So this is going to become a standard feature in every new server in the next few years.” So how will the applications running in enterprise data centers benefit?
Whatever your organization’s preference for team building, it should be carefully selected from a range of options, and it should be clear to everyone why the firm chose one particular structure over another and what’s expected of everyone participating. Start with desired outcomes and cultural norms, then articulate principles to empower action, and, finally, provide the skills and tools needed for success. ... Even in the most forward-thinking organizations, people want to know what a meeting is supposed to achieve, what their role is in that meeting, and whether gathering people around a table or their screens is the most effective and efficient way to get to the desired outcome. Is there a decision to be made? Or is the purpose information sharing? Have people been given the chance to opt out if the above points are not clear? Asking these questions can serve as a rapid diagnostic for what you are getting right – and wrong – in your meetings. Poorly run meetings sap energy and breed mediocrity.
That’s not to say that meetings aren’t important, but it makes sense for managers to find the right balance for their teams, said Dan Kador, vice president of engineering at Clockwise. “It’s something that companies have to pay attention to and try to understand their meeting culture — what’s working and what’s not working for them.” “It is important that teams get together to discuss things and make sure they are all on the same page, but often meetings are scheduled at regular intervals even if they aren’t necessary,” said Jack Gold, principal analyst and founder at J. Gold Associates. “We are all subjected to weekly meetings, or meetings at other intervals, where, even if there is nothing to discuss, the meeting takes place anyway. And some meeting organizers feel obligated to use up the entire scheduled time.” Of course, meeting overload is not just an issue for those writing code. “Too much time spent in meetings is not just a problem for developers,” said Gold. “It is a problem across the board for employees in many companies.”
To counter the threat of e-commerce skimming, the card companies are again using the two tools in their arsenal: making stolen data worthless and creating new technical security standards. To make stolen payment card data worthless, there’s a chip-equivalent technology for e-commerce called 3-D Secure v2, which has already been rolled out in the EU. This technology requires something more than just knowledge of the numbers printed on a payment card to make an online transaction. After entering their payment card data, the consumer may have to further confirm a purchase using a bank’s smartphone app or by entering a code received by SMS. Alongside this re-engineering of the payment system, the latest version of the Payment Card Industry Data Security Standard (PCI DSS) includes new technical requirements to prevent and detect e-commerce skimming attacks. PCI DSS applies to all entities involved in the payment ecosystem, including retailers, payment processors and financial institutions. First, website operators will need to maintain an inventory of all the scripts included in their websites and determine why each script is necessary.
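The script-inventory requirement described above can be approximated in an audit step. A minimal sketch using Python’s standard-library `html.parser` – the page HTML and the approved-script list here are hypothetical, and a real audit would fetch the live payment page and compare against a maintained inventory:

```python
from html.parser import HTMLParser

class ScriptAuditor(HTMLParser):
    """Collect the src of every <script> tag on a page so it can be
    checked against an approved inventory of scripts."""
    def __init__(self):
        super().__init__()
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            # Inline scripts have no src attribute; flag them separately.
            self.scripts.append(dict(attrs).get("src", "<inline>"))

# Hypothetical checkout-page HTML for illustration.
page = """
<html><head>
  <script src="https://js.example-psp.com/checkout.js"></script>
  <script>trackCart();</script>
</head></html>
"""

# The documented, justified inventory (hypothetical entry).
approved = {"https://js.example-psp.com/checkout.js"}

auditor = ScriptAuditor()
auditor.feed(page)
for src in auditor.scripts:
    status = "approved" if src in approved else "REVIEW: not in inventory"
    print(f"{src}: {status}")
```

Any script flagged for review would then need a documented justification or removal, which is the spirit of the inventory requirement.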
The great thing about cloud is that you use it when you need it. Obviously, you pay for it when you use it, but oftentimes data science applications – especially ones running over large datasets – aren’t running continuously, or don’t need to be structured in a way that they run continuously. Therefore, you’re talking about a very concentrated amount of spend over a very short amount of time. Buying hardware to do that means your hardware sits idle unless you are very active about making sure that resource is used efficiently over time. One of the biggest advantages of cloud is that it runs and scales as you need it to. So even a tiny team can run a massive computation, and run it when they need to rather than constantly. That adds challenges, of course. “I fired this thing off on Friday, I come back in on Monday and it’s still running, and I accidentally spent $6,000 this weekend. Oops.” That happens all the time, and much of the work is figuring out how to establish guardrails. Sometimes data science gets treated like, “You know, they’re going to do whatever they need to.”
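One way to put the “guardrails” idea above into practice is a budget check wrapped around a long-running job, so a forgotten weekend run stops itself before the bill surprises anyone. A minimal sketch – the hourly rate, budget, and `DummyJob` interface are all hypothetical, not any particular cloud provider’s API:

```python
import time

HOURLY_RATE_USD = 3.06   # hypothetical price of a large instance
BUDGET_USD = 500.0       # hard cap agreed with the team

def run_with_budget(job, budget=BUDGET_USD, rate=HOURLY_RATE_USD):
    """Run job in small steps, halting before the estimated spend
    exceeds the budget -- the guardrail against weekend surprises."""
    start = time.monotonic()
    while not job.done():
        hours = (time.monotonic() - start) / 3600
        if hours * rate >= budget:
            job.stop()
            raise RuntimeError(f"budget of ${budget:.2f} reached")
        job.step()
    return job.result()

class DummyJob:
    """Stand-in for a real workload, used to exercise the guardrail."""
    def __init__(self, steps):
        self.remaining = steps
    def done(self):
        return self.remaining == 0
    def step(self):
        self.remaining -= 1
    def stop(self):
        pass
    def result(self):
        return "ok"

print(run_with_budget(DummyJob(3)))  # finishes well under budget
```

Real deployments would use the provider’s billing alerts or quota APIs instead of elapsed-time estimates, but the shape of the control is the same: a spend ceiling enforced automatically, not by someone remembering to check on Monday.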
The strength of open source technology is that these products are developed with an iterative approach by a large group of experts. Open source communities are made up of diverse sets of people from across the world. This kind of diversity is beneficial because ideas and issues get vetted in multiple ways. From an enterprise perspective, open source software is a safe investment because you know there is a dedicated community with product experience. Many developers aren’t working for money and are easy to approach and ask for help. You can raise questions or concerns directly with developers, or opt to obtain a paid support plan through the community for highly technical inquiries. ... Of course, since open source products are designed for a large audience, sometimes they won’t be able to perfectly fit a company’s needs. Fortunately, the open source approach encourages customisation and integration, meaning your own internal teams can start with an open source baseline and tweak it. Improvements can also be fed back into the open source development cycle.
Data is key. To establish a baseline, the CIO must measure the impact of the enterprise’s full technology stack, including outside partners and providers. This requires asking for, extracting, and reconciling data across external parties – and remembering to aggregate more than just decarbonization data. Cloud and sourcing choices, and the disposition of assets after a cloud migration, contribute to the carbon footprint. The CIO must also guide employees to make good sustainability choices. One example: according to Cisco, there are 27.1 billion devices connected to the internet – that’s more than three devices for every person on the planet. Many enterprise employees carry two mobile phones but don’t need to – existing technology enables them to segment two different environments on one device. Also, organizations with service contracts can reject automatic hardware refreshes from a contract, empowering employees to decide whether they need a new device or just a new battery.
Architects can’t architect if they don’t speak to other people. Likewise, governance isn’t effective if you are talking best practice to yourself, alone in a dark room someplace. Getting this right in normal times isn’t always easy. People have meetings, they are working hard and don’t want to be disturbed, they need their coffee from the corporate cafeteria or the Starbucks down the street, they’re at lunch or they’re leaving at 4:30 to get to their kid’s baseball game. In short, it isn’t always possible in normal times to round people up and have a day-long whiteboard session on architecture. With hybrid working models, it is even more difficult because we can’t simply walk over to the cube next to us and have a conversation. In fact, most of the time we have no idea where people actually are or what they’re doing. We rely on text, chat, Teams, Outlook and other tools to give us a sense of whether someone has five minutes to chat. If you want a three-hour whiteboard session, that involves a high degree of coordination with people’s calendars in Outlook. Even then, people always seem to have “hard stops” at times that are really incompatible with thinking and design sessions.
Given the damage and disruption being caused by LockBit and other ransomware groups, one obvious question is why these gangs aren’t being disrupted with greater frequency, says Allan Liska, principal intelligence analyst at Recorded Future. “We all know these sites are MacGyvered together with baling wire and toothpicks and are rickety as hell. We should do stuff like this to impose cost on them,” Liska says. Some members of the information security community prefer stronger measures, of the “Aliens” protagonist Ripley variety. “I always say: go kinetic and solve the problem permanently,” says Ian Thornton-Trump, CISO of Cyjax. “Attribution is for the lawyers. I recommend a strike from orbit, it’s the only way to be sure,” he says. Another explanation for the attack would be one or more governments opting to “impose costs” on the ransomware gang, says Brett Callow, a threat analyst at Emsisoft. As he notes, the imposing-costs phrase is a direct quote from Gen. Paul M. Nakasone, the head of Cyber Command, who last year told The New York Times that the military has been tasked with not just helping law enforcement track ransomware groups, but also disrupting them.
Quote for the day:
"The manager has a short-range view; the leader has a long-range perspective." -- Warren G. Bennis