We’ve all had to learn new ways of leading and managing. But it’s important to keep the company culture alive, and the best workplace cultures are built on a foundation of trust and autonomy. Leaders can inadvertently undermine that by monitoring employee activities too closely and checking in too often. Micromanaging can hurt morale and stifle engagement, creativity, and innovation. So, if you’ve strayed into micromanager mode, it’s time to rebalance your approach. Keep in mind that one byproduct of remote work is that people may be tackling their workload outside the usual 9-to-5 schedule. Maybe they’re working later in the evening or earlier in the morning so they’ll have time to deal with the kids’ schooling. As long as quality work is getting done, does that matter? As a manager, you need to figure out what’s important and get clarity on how changes in work routines affect business goals. Align the company vision with specific business goals and make sure that the way employees complete tasks (and how you interact with your team) supports those goals. That’s how you can empower your people and maintain control where it counts without overdoing it.
We had this DevOps culture of, “You own the code, so you own everything about deploying the code.” It was very much a startup mentality in terms of how we dealt with teams and DevOps. Before that, we had a large, centralized team that handled operations. As part of our technological transformation, we went from that large centralized operations team, where you throw your code over the wall and let them deploy it, to “You own your deploys.” In that process, we ended up not giving teams a whole lot of direction. ... We’ll get you the rules that you’ll need, but the process is up to you. Teams started to share best practices; some teams would adopt other teams’ best practices, but in that kind of ecosystem there are a lot of divergent paths you can take in how you deploy your code. That’s exactly what happened to us. We ended up with a very fragmented ecosystem of processes. We started to have a lot of issues with that, which in turn led us to create policies, but the policies weren’t very enforceable because we didn’t have any insight into how they were being applied in each team’s ecosystem.
The governor of the Bank of England, Andrew Bailey, has told investors they should be prepared to lose all their money if they dabble in cryptocurrencies. Crypto assets are not covered by UK schemes that help investors reclaim cash when companies go bust. The European Central Bank has compared bitcoin’s meteoric rise to earlier financial bubbles such as “tulip mania” and the South Sea Bubble, which burst in the 17th and 18th centuries. However, banks including Goldman Sachs and Standard Chartered have launched their own cryptocurrency trading desks to take advantage of the sector’s rapid growth. The price of bitcoin has tumbled 40% since hitting an all-time high of more than $64,000 (£45,000) in mid-April; it was trading at $38,706 on Thursday afternoon. Only five crypto asset firms have been admitted to the FCA’s formal register so far. Another 90 firms are being assessed through the temporary permit scheme, which has been extended by nine months to allow the FCA to fully review all of the applications. A further 51 have withdrawn their applications, but some may not fall under the FCA’s registration rules, meaning not all of them will be forced to shut down.
One thing to remember about the line between CLR and C# concepts is that CLR concepts provide the possibility of making some logic work, while C# concepts provide an interface for actual developers to work with. The C# concepts are an opinionated view of the possible programs that can be written using CLR concepts, and over time the developers of the C# language have found ways for programmers to represent intent more clearly and succinctly on a fairly regular cadence, while the fundamental capabilities provided by CLR concepts are typically much slower to evolve. ... Having classes that behave like values has always been possible in C#, and there are many types in the framework that already do this. Generally, though, these classes fall into the category of “data”-style objects, Tuple<> for example. Doing this is neither good nor bad; it is instead an exercise in evaluating trade-offs: heap vs. stack, the cost of passing/returning, etc. In the case of records we wanted to explore classes first because that is what most of the customers who valued records were already using. In future versions of the language, though, we will allow records to be declared as structs as well, to help customers who need to make different trade-off decisions.
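The “class that behaves like a value” idea isn’t C#-specific. Since this digest has no code of its own, here is a minimal sketch in Python using a frozen dataclass (the `Point` type is made up for illustration; note that Python has no stack-allocated structs, so the heap-vs-stack trade-off discussed above doesn’t carry over):

```python
from dataclasses import dataclass

# A class whose instances behave like values: equality and hashing
# are derived from the data they carry, not from object identity.
@dataclass(frozen=True)
class Point:
    x: int
    y: int

a = Point(1, 2)
b = Point(1, 2)
assert a == b           # value equality...
assert a is not b       # ...even though they are two separate heap objects

# frozen=True makes instances immutable and hashable, so equal
# values collapse to one entry in a set:
assert len({a, b}) == 1
```

This mirrors what C# records generate for a class: value-based `Equals`/`GetHashCode` on top of an ordinary reference type.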
oneAPI enables data parallelism through two styles of programming: API-based programming and direct programming. With API-based programming, the parallel algorithm is hidden behind a system-provided API. oneAPI defines a set of APIs for commonly used data-parallel domains and provides library implementations across various hardware platforms. This enables a developer to maintain performance across multiple accelerators with minimal coding and tuning. ... oneDPL provides algorithms and functions to speed up DPC++ kernel programming. The oneDPL library follows the C++ standard library’s functions and includes extensions to support data parallelism and to simplify data-parallel algorithms. ... oneMKL is used for fundamental mathematical routines in high-performance computing and applications. This functionality is divided into dense linear algebra, sparse linear algebra, discrete Fourier transforms, random number generators, and vector math. ... oneDAL helps speed up big-data analysis by providing optimised building blocks for algorithms across the stages of data analytics: preprocessing, transformation, analysis, modelling, validation, and decision making.
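The API-based vs. direct distinction is language-agnostic. As a rough illustration (in Python with stdlib thread pools, not oneAPI’s DPC++, and with made-up data), compare letting a library hide the work decomposition against managing the chunking yourself:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000))

# "API-based programming": the parallel algorithm is hidden behind a
# library call -- we state *what* to compute, not how to split the work.
with ThreadPoolExecutor() as pool:
    squares_api = list(pool.map(lambda x: x * x, data))

# "Direct programming": we own the decomposition, chunking the data
# and stitching the partial results back together ourselves.
def square_chunk(chunk):
    return [x * x for x in chunk]

chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
with ThreadPoolExecutor() as pool:
    partials = pool.map(square_chunk, chunks)
squares_direct = [y for part in partials for y in part]

assert squares_api == squares_direct
```

In oneAPI terms, calling into oneDPL/oneMKL/oneDAL corresponds to the first style, while writing your own DPC++ kernels corresponds to the second.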
Large corporations now have the resources and relationships to access machines directly, and those machines are available from IBM, from Honeywell, and from other companies as well. It’s also now possible to subscribe to these machines, because some of the big cloud providers (Amazon Web Services and Azure are two examples) have taken initial steps towards offering what we might describe as quantum processing units alongside regular high-performance computing. Those early access agreements are now available for subscription, sometimes on a daily or even an hourly basis. And then beneath all of that, there is a clutch of start-ups like IQM in Finland, Alpine Quantum Technologies in Austria and Oxford Quantum Computing in the UK that are all on a very steep trajectory. Their processors will be available in a variety of ways. All of this means that a large corporate entity has a variety of ways of accessing quantum processors, and what we do is to pull all of that together. We have two distinguishing features.
One concern is managing a hybrid team, where some people are at home and others are at the office. I hear endless anxiety about this generating an office in-group and a home out-group. For example, employees at home can see glances or whispering in the office conference room but can’t tell exactly what is going on. Even when firms try to avoid this by requiring office employees to take video calls from their desks, home employees have told me that they can still feel excluded: they know that after the meeting ends the folks in the office may chat in the corridor or go grab a coffee together. The second concern is the risk to diversity. It turns out that who wants to work from home after the pandemic is not random. In our research we find, for example, that among college graduates with young children, women want to work from home full-time almost 50% more than men. This is worrying given the evidence that working from home while your colleagues are in the office can be highly damaging to your career. In a 2014 study I ran at a large multinational in China, we randomized 250 volunteers into a group that worked remotely four days a week and another group that remained in the office full time.
For those organizations not involved in the development of quantum computers, preparatory actions are clear. We must urgently overcome our inability to keep existing computers secure; the quantum computer of the future will be of little use if we fail to break our dependency on legacy technology and poor management practices today. And as quantum computing improves, we must remain in front of our adversaries by leveraging new technology before it is adopted by those who wish to do us harm. ... Quantum computing is far too immature for any immediate real-world application or for us to see the benefits that its theory promises. We can make some educated guesses, though. Peter McMahon, of Applied and Engineering Physics at Cornell University, writes of quantum computing capabilities: “We’re trying to find something useful we can do with a near-term quantum computer that would answer a question in quantum gravity, or high-energy physics more generally, that couldn’t be answered otherwise, for instance, can we simulate a model of a black hole on a quantum computer? Would that be useful? We don’t know if we’ll find anything, but it’s very interesting to try.”
The initial point of entry for the attack was an unpatched enterprise Microsoft Exchange server, from which attackers used Windows Management Instrumentation (WMI) – a scripting tool for automating actions in the Windows ecosystem, primarily used on servers – to install other software onto machines inside the network that they could reach from the Exchange server. It’s not entirely clear if attackers leveraged the infamous Exchange ProxyLogon exploit that was a major pain point for Microsoft earlier in the year. However, the unpatched server used in the attack was indeed vulnerable to this exploit, Brandt observed. During the attack, threat actors launched a series of PowerShell scripts, numbered 1.ps1 through 12.ps1, as well as some that were named with a single letter from the alphabet, to prepare the attacked machines for the final ransomware payload. The scripts also delivered and initiated the Epsilon Red payload, he wrote. The PowerShell scripts use a “rudimentary form of obfuscation” that didn’t hinder Sophos researchers’ analysis but “might be just good enough to evade the detection of an anti-malware tool that’s scanning the files on the hard drive for a few minutes, which is all the attackers really need,” Brandt noted.
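The “rudimentary obfuscation” point generalizes: even trivial encoding defeats a scanner that greps files for literal strings. A generic Python sketch (not the actual Epsilon Red scripts; the `Invoke-Payload` command name is invented) using the base64/UTF-16-LE encoding PowerShell accepts for encoded commands:

```python
import base64

# Illustrative only: the plaintext command never appears on disk,
# only its base64 form (PowerShell's -EncodedCommand expects the
# command text as UTF-16-LE before base64 encoding).
command = "Invoke-Payload -Target all"   # hypothetical command name
encoded = base64.b64encode(command.encode("utf-16-le")).decode("ascii")

# A scanner searching files for the literal string will miss it:
assert "Invoke-Payload" not in encoded

# ...but a one-line decode recovers the command at run time.
decoded = base64.b64decode(encoded).decode("utf-16-le")
assert decoded == command
```

This is exactly why such obfuscation only needs to buy minutes: it hides nothing from an analyst, but it can be “just good enough” against naive on-disk signature scans.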
Hasura can implement API caching for dynamic data automatically because Hasura’s metadata configuration has detailed information about both the data models and the authz rules, which in turn describe which user can access what data. This is very useful because, otherwise, developers often need to build web APIs that provide data access by hand. Moreover, devs need deep domain knowledge so that they can also build caching strategies that recognize which queries to cache for which users/user groups, using caching stores like Redis to provide API caching. But this is just a part of the problem. The harder bit is cache invalidation. Developers use TTL-based caching to avoid worrying about cache invalidation vs. consistency, and let the API consumers deal with the inconsistency. Hasura can, in theory, provide automated cache invalidation as well, because Hasura has deep integrations into the sources of data: all access to the data can go through Hasura, or Hasura can use the data source’s CDC mechanism. This part of the caching problem is similar to the “materialized view update” problem.
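As a toy sketch of what TTL-based caching means (hypothetical code, not Hasura’s implementation), note that the cache key must encode who is asking as well as what is asked, echoing the authz point above; stale entries are simply tolerated until they expire rather than being invalidated when the underlying data changes:

```python
import time

class TTLCache:
    """Minimal TTL cache: entries silently expire after `ttl` seconds."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # expired: caller must refetch
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=0.05)
# Key includes the role/user, so users never see each other's data.
cache.set(("role:user", "user:42", "orders"), [{"id": 1}])
assert cache.get(("role:user", "user:42", "orders")) == [{"id": 1}]
time.sleep(0.06)
assert cache.get(("role:user", "user:42", "orders")) is None  # expired
```

The trade-off is visible in the last two lines: within the TTL window the API may serve data that is up to `ttl` seconds stale, which is precisely the inconsistency that automated invalidation would remove.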
Quote for the day:
"Speak softly and carry a big stick; you will go far." -- Theodore Roosevelt