Daily Tech Digest - July 10, 2022

Customer.io Email Data Breach Larger Than Just OpenSea

The company is not revealing how many emails are now at heightened risk of phishing attempts as a result of the "deliberate actions" of the former employee. Non-fungible token marketplace platform OpenSea partially divulged the incident late last month when it warned anyone who had ever shared an email address with it about the unauthorized transfer of contact information. Approximately 1.9 million users have made at least one transaction on the platform, according to data from blockchain analytics firm Dune Analytics. Customer.io did not identify the other affected companies to Information Security Media Group or specify the sectors in which they operate. The affected parties have been alerted, the company says. The incident underscores the continuing threat posed by insiders, who account for 20% of all security incidents, according to the most recent Verizon Data Breach Investigations Report. The costs of insider breaches, whether caused by human error or bad actors, are going up: the Ponemon Institute found a 47% increase over the past two years.


Making the DevOps Pipeline Transparent and Governable

When DevOps was an egg, it really was an approach that was radically different from the norm. And what I mean, obviously for people that remember it back then, is that it was the continuous... it had nothing to do with Agile. It was really about continuous delivery of software into the environment in small chunks, microservices coming up. It was delivering very specific pieces of code into the infrastructure, continuously, evaluating the impact of that release and then making adjustments and changes in response to the feedback that gave you. So the fail-forward thing was very much an accepted behavior. What it glossed over a bit at the time was that it removed a lot of the compliance and regulatory type of mandatory things that people would use in the more traditional ways of developing and delivering code, but it was a fledgling practice. And from that base form, it became a much, much bigger one. So really what that culturally meant was that initially it was many, many small teams working in combination toward a bigger outcome, whether it was stories in support of epics or whatever the approach was.


SQL injection, XSS vulnerabilities continue to plague organizations

Critical and high findings were low in mobile apps, at just over 7% for Android apps and close to 5% for iOS programs. Among the most common high and critical errors identified in mobile apps were credentials hard-coded into the apps. Using these credentials, attackers can gain access to sensitive information, the report explained. More than 75% of the errors found in APIs were in the low category. However, the report warns that low risk doesn’t equate to no risk. Threat actors don’t consider the severity of the findings before they exploit a vulnerability, it warned. Among the highest critical risks found in APIs were missing function-level access controls (47.55%) and Log4Shell vulnerabilities (17.48%). Of all high and critical findings across companies, the report noted, 87% were found in organizations with fewer than 200 employees. The report identified several reasons for that, including cybersecurity being an afterthought in relatively small organizations; a dearth of bandwidth, security know-how, and staffing; a lack of security leadership and budget; and the speed of business overpowering the need to do business securely.
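
Two of the findings called out here, hard-coded credentials and (per the title) SQL injection, have well-understood code-level fixes. A minimal sketch in Python with the standard-library sqlite3 module (the environment variable, table, and column names are hypothetical): read secrets from the environment rather than the source tree, and parameterize queries so user input is passed as data, never spliced into the SQL string.

```python
import os
import sqlite3

# Credentials/paths come from the environment (or a secrets manager),
# not from string literals checked into the repository.
conn = sqlite3.connect(os.environ["APP_DB_PATH"])

def find_user(email: str):
    # Vulnerable pattern: f"SELECT ... WHERE email = '{email}'" lets a
    # crafted input rewrite the query. The parameterized form below
    # hands `email` to the driver as data instead.
    cur = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cur.fetchone()
```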


Three golden rules for building the business case for Kubernetes

Cost, customer service and efficiency are the three typical considerations any business weighs when it comes to making new investments. Whether a new initiative will reduce costs in the long run, and be worth the initial expense, is a question decision-makers weigh all the time. Kubernetes passes this test because it addresses the challenge of managing the potentially thousands or tens of thousands of containers a large enterprise might have deployed. ... The second consideration is whether the investment will mitigate the risk of losing a customer. Is the ability to serve their needs improved as a result of the changes? Again, Kubernetes meets the criteria here. By taking a microservices approach to applications, it allows them and the underlying resources they need to be scaled up or down, based on the current needs of the organization. ... The third and final consideration is whether the new technology or initiative will improve the ways the business operates. What might it achieve that a business couldn’t do before?


Infrastructure-as-Code Goes Low Code/No Code

The cross-disciplinary skill set required by IaC — security, operations and coding experience in one person — is a niche, Thiruvengadam told The New Stack. The San Jose, Calif.-based DuploCloud targets that need with a low-code/no-code solution. “The general idea with DuploCloud is that you can use infrastructure-as-code, but you just have to write a lot less lines of code,” he said. “A lot of people who don’t have all the three skill sets still can operate at the same scale and efficiency, using this technology — that’s fundamentally the core advantage.” Unlike some solutions, which rely on ready-made modules or libraries, Thiruvengadam said that DuploCloud uses a low-code interface to put together the rules for its rules-based engine, which then runs through the rules to produce the output. The self-hosted, single-tenant solution is deployed within the customer’s cloud account. Currently, it supports deployment on Amazon Web Services, Microsoft Azure and Google Cloud, and it can run on-premises as well.
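
For contrast with the low-code approach described here, this is roughly what hand-written infrastructure-as-code looks like. A minimal sketch using Pulumi's Python SDK (an assumption chosen for illustration; this is not DuploCloud's rules format, and the resource names are made up). Run under `pulumi up`, the declaration below is diffed against real cloud state and applied:

```python
import pulumi
import pulumi_aws as aws

# Declare the desired state: one S3 bucket with tags.
bucket = aws.s3.Bucket("app-assets", tags={"env": "dev"})

# Export the generated bucket name so other stacks can consume it.
pulumi.export("bucket_name", bucket.id)
```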


The Compelling Implications of Using a Blockchain to Record and Verify Patent Assignments

Smart contracts could be used to put various types of conditions and obligations on a patent asset. For example, companies might incentivize their inventors to disclose more inventions by placing an obligation on all future owners of an asset to pay the inventors some percentage of future licensing, sales, settlements, or judgments involving that asset (e.g., the inventors get 10% of the total value of such transactions). This would allow inventors of commercially valuable patents to enjoy the financial benefits of their inventions in a fashion that is more equitable than, say, a one-time nominal payout upon filing or grant. Since patents can only be asserted when all owners agree to do so, such contracts would have to clearly separate ownership of a patent asset from an obligation of the owner to compensate a previous owner for the asset's future revenue. Another potential use of smart contracts would be for ownership of an issued patent to revert to its previous owner should the current owner fail to pay maintenance fees on time.
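
A toy model of the revenue-share obligation described above, in plain Python rather than actual on-chain smart-contract code (the 10% rate, class shape, and names are all illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class PatentAsset:
    current_owner: str
    inventors: list
    inventor_share: float = 0.10          # hypothetical 10% obligation
    ledger: list = field(default_factory=list)

    def record_transaction(self, kind: str, amount: float) -> float:
        """Record a licensing/sale/settlement event and return the
        amount owed to the inventors, independent of who owns the asset."""
        owed = amount * self.inventor_share
        self.ledger.append((kind, amount, owed))
        return owed

asset = PatentAsset(current_owner="AcmeCo", inventors=["Lee", "Patel"])
print(asset.record_transaction("license", 1_000_000))   # -> 100000.0
```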


Breaking down the crypto ecosystem

According to several Indian and global reports, a reduction in transaction costs is expected to further propel this market's growth in the next few years. In line with global trends, increasing adoption of digital currency by businesses, coupled with talks of a government-backed digital currency in the country, is further anticipated to bolster the growth of the cryptocurrency market. In the present Web 2.0 environment, establishing trust and creating social identities for network participants has been an uphill task that the ecosystem has been unable to overcome. And since almost all economic value is traded based on human relationships, this is a fundamental roadblock to innovation and growth in Web 2.0. However, the rise of cryptocurrency and blockchain has fuelled a rapid transition toward Web 3.0, where we have witnessed exponential growth, especially in enablers like NFTs, which have made it possible to acquire, store, and distribute economic value among users. In fact, the introduction of SBTs (SoulBound Tokens) could be the final piece in the puzzle for the Web 3.0 ecosystem.


What is observability? A beginner's guide

For decades, businesses that control and depend on complex distributed systems have struggled to deal with problems whose symptoms are often buried in floods of irrelevant data or those that show high-level symptoms of underlying issues. The science of root cause analysis grew out of this problem, as did the current focus on observability. By focusing on the states of a system rather than on the state of the elements of the system, observability provides a better view of the system's functionality and ability to serve its mission. It also provides an optimum user and customer experience. Observability is proactive where necessary, meaning it includes techniques to add visibility to areas where it might be lacking. In addition, it is reactive in that it prioritizes existing critical data. Observability can also tie raw data back to more useful "state of IT" measures, such as key performance indicators (KPIs), which are effectively a summation of conditions to represent broad user experience and satisfaction.
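
As a small sketch of that last point, rolling raw telemetry up into a KPI (plain Python; the event fields, values, and KPI definitions are illustrative assumptions):

```python
from statistics import quantiles

# Raw request events as they might arrive from instrumented services.
events = [
    {"route": "/checkout", "ok": True,  "latency_ms": 120},
    {"route": "/checkout", "ok": True,  "latency_ms": 95},
    {"route": "/checkout", "ok": False, "latency_ms": 2300},
    {"route": "/checkout", "ok": True,  "latency_ms": 110},
]

# Summarize raw data into "state of IT" measures.
availability = sum(e["ok"] for e in events) / len(events)
p95_latency = quantiles([e["latency_ms"] for e in events], n=20)[-1]

print(f"availability KPI: {availability:.1%}")    # 75.0%
print(f"p95 latency KPI: {p95_latency:.0f} ms")
```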


10 trends shaping the chief data officer role

CDOs may need to rethink cybersecurity in response to the growth in data sources and data volumes, said Christopher Scheefer, vice president of intelligent industry at Capgemini Americas. These new and nontraditional data streams require additional methods of securing and managing access to data. "The importance of cybersecurity in a pervasively connected world is a trend many CDOs cannot ignore due to the growing threats of IP infringement, regulatory risks and exposure to a potentially damaging event," Scheefer said. Rethinking and reimagining cybersecurity is no small feat. The complexity of integrating connected products and operations into the business presents an incredible amount of risk. Establishing proper governance and tools, and working with cybersecurity leadership, is critical. It is the CDO's job to ensure the business does not constrain itself by limiting external connections and services that could bring competitive advantage and paths to growth, Scheefer said.


Streamlining Unstructured Data Migration for M&A and Divestitures

It’s common to take all or most of the data from the original entity and dump it onto storage infrastructure at the new company. While this may seem like the simplest way to handle a data migration, it’s problematic for several reasons. First, it’s highly inefficient. You end up transferring lots of data that the new business may not actually need or records for which the mandatory retention period may have expired. A blind data dump from one business to another also increases the risk that you’ll run afoul of compliance or security requirements that apply to the new business entity but not the original one. For instance, the new business may be subject to GDPR data privacy mandates because of its location in Europe. But if you simply move data between businesses without knowing what’s in the data or which mandates it needs to meet, you’re unlikely to meet the requirements following the transfer. Last but not least, blindly moving and storing data deprives you of the ability to trace the origins of data after the fact. 



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis

Daily Tech Digest - July 09, 2022

Ray Kurzweil Wants to Upload Your Brain to the Cloud

Well, this can go one of two ways. Either this brain/cloud situation will be an incredibly beneficial superpower, or it could be just another farming device for data mining and ad sales. My take: If it’s a beneficial superpower then it won’t be given to the general public. Superpower for the rich. Farming device for the regular people. And thank you very much but I am farmed enough. My Hinge updates don’t need to be sent to my cerebellum. I can’t talk about taking a trip to Costa Rica without flights popping up on my phone. I’m grateful for the ways technology has touched my life but let me remind people about the Flo app. This is a period and fertility tracking app that settled with the FTC in May for selling its users’ personal health data without their knowledge. While there are definitely huge potential advances that could be made from brain/cloud merges, I can only think of social media companies that are designed to addict us, with at least one of these apps in the recent past tracking our eye movements to see what we liked so we could be coaxed to spend more time using it. It’s not all bad but I am not looking to plug in forever. And I don’t trust these companies to do good.


NIST’s pleasant post-quantum surprise

To understand the risk, we need to distinguish between the three cryptographic primitives that are used to protect your connection when browsing on the Internet:

Symmetric encryption - With a symmetric cipher there is one key to encrypt and decrypt a message. Symmetric ciphers are the workhorse of cryptography: they're fast, well understood and, luckily, as far as is known, secure against quantum attacks. ... Symmetric encryption alone is not enough: which key do we use when visiting a website for the first time? We can't just pick a random key and send it along in the clear, as then anyone surveilling that session would know that key as well. You'd think it's impossible to communicate securely without ever having met, but there is some clever math to solve this.

Key agreement - Also called a key exchange, key agreement allows two parties that have never met to agree on a shared key. Even if someone is snooping, they are not able to figure out the agreed key. Examples include Diffie–Hellman over elliptic curves, such as X25519. Key agreement prevents a passive observer from reading the contents of a session, but it doesn't help defend against an attacker who sits in the middle and does two separate key agreements: one with you and one with the website you want to visit.
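
As a minimal sketch of the key-agreement step (Python with the `cryptography` package; in real TLS this exchange happens inside the handshake rather than by hand, so treat this as an illustration only):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates an ephemeral key pair and sends only the public half.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key and
# arrives at the same shared secret; a snooper who sees only the two
# public keys cannot reconstruct it.
client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())
assert client_shared == server_shared

# The raw shared secret is then fed through a KDF to derive the
# symmetric key that encrypts the session.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"demo handshake").derive(client_shared)
```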


Buggy 'Log in With Google' API Implementation Opens Crypto Wallets to Account Takeover

The first bug involved a common feature found in mobile apps: allowing users to log in using an external service, like Apple ID, Google, Facebook, or Twitter. In this case, the researchers examined the "log in with Google" option — and found that the authentication token mechanism could be manipulated to accept a rogue Google ID as being that of the legitimate user. The second bug allowed the researchers to get around two-factor authentication. A PIN-reset mechanism was found to lack rate limiting, allowing them to mount an automated attack to uncover the code sent to a user's mobile number or email. "This endpoint does not contain any sort of rate limiting, user blocking, or temporary account disabling functionality. Basically, we can now run the entire 999,999 PIN options and get the correct PIN within less than 1 minute," according to the researchers. Each security issue on its own provided limited abilities to the attacker, according to the report. "However, an attacker could chain these issues together to propagate a highly impactful attack, such as transferring the entire account balance to his wallet or private bank account."
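
The missing mitigation the researchers describe is ordinary rate limiting on the PIN endpoint. A minimal sketch of the idea (plain Python with an in-memory counter; the thresholds and names are illustrative, and a real service would keep the counters in a shared store such as Redis):

```python
import time

WINDOW_SECONDS = 300      # look-back window
MAX_ATTEMPTS = 5          # attempts allowed per account per window
_attempts: dict[str, list[float]] = {}

def allow_pin_attempt(account_id: str) -> bool:
    """Refuse further guesses once an account exceeds its budget, turning
    a sub-minute sweep of 999,999 PINs into an impractical one."""
    now = time.monotonic()
    recent = [t for t in _attempts.get(account_id, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        return False
    recent.append(now)
    _attempts[account_id] = recent
    return True
```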


How To Become A Self-Taught Blockchain Developer

The Blockchain developer must provide original solutions to complex issues, such as those involving high integrity and command and control. The developer also performs complex analysis, design, development, testing, and debugging of computer software, particularly for specific product hardware or for companies' technical service lines. Developers carry out computer system selection, operating architecture integration, and program design. Finally, they use their understanding of one or more platforms and programming languages while working on a variety of systems. There will undoubtedly be challenges for the Blockchain developer. For instance, the developer must fulfill the criteria of a Blockchain development project despite working with old technology and its restrictions. A Blockchain developer needs specialized skills due to the difficulty of understanding the technological realities of developing decentralized cryptosystems, processes that are beyond the normal IT development skill set.


Machine learning begins to understand human gut

While human gut microbiome research has a long way to go before it can offer this kind of intervention, the approach developed by the team could help get there faster. Machine learning algorithms often are produced with a two-step process: accumulate the training data, and then train the algorithm. But the feedback step added by Hero and Venturelli's team provides a template for rapidly improving future models. Hero's team initially trained the machine learning algorithm on an existing data set from the Venturelli lab. The team then used the algorithm to predict the evolution and metabolite profiles of new communities that Venturelli's team constructed and tested in the lab. While the model performed very well overall, some of the predictions identified weaknesses in the model performance, which Venturelli's team shored up with a second round of experiments, closing the feedback loop. "This new modeling approach, coupled with the speed at which we could test new communities in the Venturelli lab, could enable the design of useful microbial communities," said Ryan Clark, co-first author of the study, who was a postdoctoral researcher in Venturelli's lab when he ran the microbial experiments.
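
The two-steps-plus-feedback structure can be sketched as a loop (Python with scikit-learn and NumPy; the synthetic stand-in for the lab experiments and the uncertainty heuristic are illustrative assumptions, not the study's actual method):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def run_lab_experiment(designs):
    """Stand-in for building and measuring microbial communities in the
    lab; a hidden synthetic function plays the role of the biology."""
    return designs.sum(axis=1) + 0.1 * rng.standard_normal(len(designs))

# Steps 1-2: accumulate training data, then train.
X = rng.random((50, 5))            # toy community compositions
y = run_lab_experiment(X)          # toy measured outputs
model = RandomForestRegressor(random_state=0).fit(X, y)

# Feedback loop: predict on new candidate communities, test the ones the
# model is least sure about (tree disagreement), and fold results back in.
for _ in range(3):
    candidates = rng.random((200, 5))
    per_tree = np.stack([t.predict(candidates) for t in model.estimators_])
    uncertain = candidates[np.argsort(per_tree.std(axis=0))[-10:]]
    X = np.vstack([X, uncertain])
    y = np.concatenate([y, run_lab_experiment(uncertain)])
    model.fit(X, y)
```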


Jorge Stolfi: ‘Technologically, bitcoin and blockchain technology is garbage’

It is the only thing that blockchain could contribute: the absence of a central authority. But that only creates problems. Because to have a decentralized database you have to pay a very high price. You must ensure that all miners do "proof of work." It takes longer, and it is not even secure, because in the past there have been occasions where they have had to rewind several hours' worth of blocks to remove a bad transaction, in 2010 and 2013. The conditions that made that possible are still there, and that's why blockchain technology is a fraud: it promises to do something that people already know how to do. ... It is the only digital system that does not follow customary money-laundering laws. That's why criminals use it. Once you have paid a ransom, there is no way for the victim to cancel the payment and get the money back; not even the government can do it easily. It is anonymous, and when a hacker encrypts your data, they do not have to enter your system directly, where they would leave a trace. They have botnets, computers that they have already hacked, so tracking them down is difficult.


How to Write Secure Source Code for Proprietary Software

Source code is at the mercy of developers and anyone else that has access to it. That means limiting access to your source code and establishing security guidelines for those with access is vital for increasing security. It's also important to realize that insider threat actors aren't always malicious. Often, insider threats come from mistakes or negligent actions taken by employees. ...  Outside threats come from outside of your development team. They may come from competitors that want to use the code to improve their own. Or, they can come from hackers who will attempt to sell your source code or pick it apart looking for vulnerabilities. The point is, whether a leak comes from inside or outside threats, it can have terrible consequences. Source code leaks can lead to additional attacks, exposing large amounts of sensitive data. Source code leaks can also lead to financial losses by giving competitors an advantage. And your customers will think twice before dealing with a developer that has exposed valuable customer data in the past.


How IoT and digital twins could help CIOs meet ESG pledges

This inevitably leads to accusations of greenwashing, where marketing departments hijack the ambitions of organisations before any serious, robust plan is in place. For CIOs tasked with bringing down emissions and adhering to targets, this can be a huge problem. A recent IBM CEO study finds that CEOs are coming under increasing pressure from stakeholders to act on sustainability. It cites “frustrations” with organisations’ “all talk and no action”. Culture is seen as a significant issue in hampering any attempts to co-ordinate carbon emission strategies. “If you want to avoid the trap of greenwashing, it needs to start with the CEO,” says Alicia Asín, CEO of Libelium, an IoT business based in Zaragoza, Spain. Asín, speaking on a panel at IoT World Congress, added that this creates a culture where the whole organisation needs to look at the design and sustainability credentials of every technology offering for every sustainable project. She used an example of a farm customer that is using IoT to reduce the amount of water in irrigation and to reduce the level of pesticides being used on their crops.


GitHub Copilot is the first real product based on large language models

The success of GitHub Copilot and Codex underlines one important fact. When it comes to putting LLMs to real use, specialization beats generalization. When Copilot was first introduced in 2021, CNBC reported: “…back when OpenAI was first training [GPT-3], the start-up had no intention of teaching it how to help code, [OpenAI CTO Greg] Brockman said. It was meant more as a general purpose language model [emphasis mine] that could, for instance, generate articles, fix incorrect grammar and translate from one language into another.” But while GPT-3 has found mild success in various applications, Copilot and Codex have proven to be great hits in one specific area. Codex can’t write poetry or articles like GPT-3, but it has proven to be very useful for developers of different levels of expertise. Codex is also much smaller than GPT-3, which means it is more memory- and compute-efficient. And given that it has been trained for a specific task, as opposed to the open-ended and ambiguous world of human language, it is less prone to the pitfalls that models like GPT-3 often fall into.


LockBit explained: How it has become the most popular ransomware

After obtaining initial access to networks, LockBit affiliates deploy various tools to expand their access to other systems. These tools include credential dumpers like Mimikatz; privilege escalation tools like ProxyShell; tools used to disable security products and various processes, such as GMER, PC Hunter and Process Hacker; network and port scanners to identify Active Directory domain controllers; and remote execution tools like PsExec or Cobalt Strike for lateral movement. The activity also involves the use of obfuscated PowerShell and batch scripts and rogue scheduled tasks for persistence. Once deployed, the LockBit ransomware can also spread to other systems via SMB connections using collected credentials, as well as by using Active Directory group policies. When executed, the ransomware will disable Windows volume shadow copies and delete various system and security logs. The malware then collects system information such as hostname, domain information, local drive configuration, remote shares and mounted storage devices, and will then start encrypting all data on the local and remote devices it can access.



Quote for the day:

"If you want people to to think, give them intent, not instruction." -- David Marquet

Daily Tech Digest - July 07, 2022

Metaverse Standards Forum Makes Data Interoperable But Only For Big Tech

Interoperability is the driving force for the growth and adoption of the open metaverse. Hence, the Metaverse Standards Forum aims to analyze the interoperability necessary for running the metaverse. More than 30 companies took up their respective posts as founding members of the forum. Game developers, architects, and engineers are mere clicks away from building the next cutting-edge metaverse project with artificial intelligence and advanced hardware. Setting interoperability standards with consideration to available technology is crucial to the mass adoption of the metaverse. Similar to the Metaverse Standards Forum, some key players are missing from the Oasis Consortium, like Meta. And in the past, groups like this have become smaller and smaller once internal conflict inevitably arises. The Metaverse Standards Forum is led by the Khronos Group, a nonprofit consortium working on AR/VR, artificial intelligence, machine learning, and more. Khronos has already tried to set a standard for VR APIs with its similarly named VR Standards Initiative in 2016, which included companies like Google, Nvidia, Epic Games and Oculus, which is now part of Meta.


Identity Access Management Is Set for Exploding Growth, Big Changes — Report

As SaaS and cloud subscription services have proliferated in the space, smaller firms increasingly have found IAM within their reach, and this study says to expect this trend to snowball. Whereas the subscription model makes up 60% of the market now, in five years the researchers forecast it will make up 94% of all IAM spending. Meanwhile, other, broader IT trends such as the explosion in cloud computing, bring-your-own-device (BYOD) policies, mobile computing, Internet of Things (IoT), and more geographically dispersed workers are all spurring greater IAM services spending to solve an acute need for saner access control. "There are more devices and services to be managed than ever before, with different requirements for associated access privileges," according to Juniper's analysts. "With so much more to keep track of, as employees migrate through different roles in an organization, it becomes increasingly difficult to manage identity and access." According to Naresh Persaud, managing director in cyber-identity services for Deloitte Risk & Financial Advisory, the market has been especially jumpstarted in the last 12 to 18 months as organizations work to accommodate a broader range and larger scale of remote-work situations.


Working with Microsoft’s .NET Rules Engine

Getting started with the .NET Rules Engine is relatively simple. You will first need to consider how to separate rules from your application and then how to describe them in lambda expressions. There are options for building your own custom rules using public classes that can be referred to from a lambda expression, an approach that gets around the limitation that lambda expressions can only use methods from .NET's System namespace. You can find a JSON schema for the rules in the project's GitHub repository. It's a comprehensive schema, but in practice you're likely to need only a relatively basic structure for your rules. Start by giving your rules workflow a name and then following it up with a nested list of rules. Each rule needs a name, an event that's raised if it's successful, an error message and type, and a rule expression that's defined as a lambda expression. Your rule expression needs to be defined in terms of the inputs to the rules engine. Each input is an object, and the lambda function evaluates the various values associated with the input.
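
Put together, a workflow along the lines just described might look like the following (the field names follow the RulesEngine project's published samples, but the discount scenario, values, and event names here are invented for illustration):

```json
[
  {
    "WorkflowName": "DiscountWorkflow",
    "Rules": [
      {
        "RuleName": "LoyalCustomerDiscount",
        "SuccessEvent": "10PercentDiscount",
        "ErrorMessage": "Customer does not qualify for the loyalty discount.",
        "ErrorType": "Error",
        "RuleExpressionType": "LambdaExpression",
        "Expression": "input1.loyaltyYears >= 2 AND input1.totalOrders > 10"
      }
    ]
  }
]
```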


10 Questions to Ask Yourself Before Starting Your Entrepreneurial Journey

Entrepreneurship is over-glorified and misrepresented on social media. In reality, it is about building a business that solves a problem for a consumer. It's not about driving nice cars or posting nice pictures on social media. In fact, real entrepreneurship looks quite contrary to what we see on social media. Do we require a certain level of luck, genetics and an environment around us to be an entrepreneur? Yes — somewhat, for sure. But also, anyone can solve problems anywhere in the world. That is true for both small problems and big problems. The choice comes in the decision to find people who have needs, wants and issues that you can offer a solution for. It is also a choice that each of us gets to make on how well we wish to solve that issue — how obsessed we are willing to become with that solution and how above and beyond we are willing to go with servicing the customers well. Beyond the business solution also comes the personal and emotional responsibility — shaping and growing ourselves to be able to handle and maneuver through constant stress and difficulties. 


Don’t let automation break change management

Where automation is essential and unavoidable, network teams need to make sure all the good they can do with automation is not done at the expense of, or in conflict with, one of the other pillars of enterprise IT: change management. They need to make sure automation is controlled by change management, and that they are keeping change management processes in step with their increasing reliance on automation. One aspect is to implement change management on the automation itself, including the scripts, config files, and playbooks used to manage the network. The use of code management tools helps with this: check-out and check-in events help staff remember to follow other parts of proper process. Applying change management at this level means describing the intended modifications to the automation, testing them, planning deployment, having a fallback plan to the previous known-good code where that is applicable, and determining specific criteria by which to judge whether the change succeeded or needs to be rolled back.


Imagination is key to effective data loss prevention

SecOps teams are charged with protecting data on a network or endpoint in each of its forms: at rest, in use, and in motion. To be in the driver’s seat and create the appropriate rules or policies to protect data across these three classifications requires teams to understand their environment fully. This is why organizations should consider implementing a flexible, scalable XDR (extended detection and response) architecture that can seamlessly integrate with their current security tools and connect all the dots to eliminate security gaps. With native integrations and connections for security policy orchestration across data and users, endpoints and collaboration, clouds and infrastructure, an XDR architecture provides SecOps teams with maximum visibility and control. ... Knowing what to protect, even before establishing protection, is key. So much so that comprehensive data visibility is a critical tenet for any SecOps team. Achieving this enables security teams to have the flexibility to create data protection parameters tailored to their own specific needs, creating an environment where the only limit on what they can achieve is their imagination.


The importance of digital skills bootcamps to UK tech industry success

The success of digital skills bootcamps in helping to secure the UK tech industry’s future is heavily contingent on the level of involvement from businesses. At present, however, not enough organisations are devoting the time needed to upskill or reskill staff, with research conducted by MPA Group finding that over a third of companies – 35 per cent – allow workers to devote less than two hours per week to training, research, and development. Although there may be a number of reasons for this, MPA Group’s research indicated that ‘a lack of budget’ was considered by businesses to be the largest barrier to workplaces allowing staff to spend time on development. Digital skills bootcamps are helping to solve this problem by enabling companies to take advantage of the considerable state investment in the initiative, meaning organisations are given more affordable access to industry-led training. What’s more, with bootcamps having already been trialled to great success in places like the West Midlands – where approximately 2,000 adults have been trained in essential tech skills over the past few years – firms have the opportunity to hire recent programme graduates who can help impart what they have learned to their colleagues.


The Parity Problem: Ensuring Mobile Apps Are Secure Across Platforms

To build a robust defense, mobile developers need to implement multiple layers of protection that are both ‘broad’ and ‘deep’. By broad, I'm talking about multiple security features from different protection categories, which complement each other, such as encryption + obfuscation. By ‘deep’, I mean that each security feature should have multiple methods of detection or protection. For example, a jailbreak-detection SDK that only performs its checks when the app launches won’t be very effective, because attackers can easily bypass the protection. Or consider anti-debugging, which is an important runtime defense to prevent attackers from using debuggers to perform dynamic analysis – where they run the app in a controlled environment in order to understand or modify the app’s behavior. There are many different types of debuggers – some based on LLDB – for native code like C++ or Objective-C, others that inspect at the Java or Kotlin layer, and a lot more. Every debugger works a little bit differently in terms of how it attaches to and analyzes the app.


4 ways CIOs can create resilient organizations

As CIO, you need to make sure your technology investments enable change. After all, you might need to support an entirely remote employee population. You might need to offer new capabilities that attract top talent or quickly shut down business in a region wracked by geopolitical conflict. Organizations invest large sums in migrating to the cloud. One reason is the ability to grow with needs. But technology scale is no longer the primary benefit of the cloud. And scale is no longer a guarantee of resilience. Rather, focus your cloud and software-as-a-service (SaaS) investments on supporting rapid change. Multi-cloud strategy, containerization, agile DevSecOps development methodologies: All should be designed around elasticity that equips you to make quick wins or pivot to new business models. ... Data analytics can provide holistic views and predictive models that help CIOs and others understand emerging trends. Those insights support data-driven decision-making and ultimately, resilience. That’s because you no longer have to rely on gut feel to prepare for an otherwise unpredictable future. 


What happens when there’s not enough cloud?

Most companies struggle to find enough customers to buy their products. According to Selipsky in a Mad Money interview, cloud companies like AWS might have the opposite problem. “IT is going to move to the cloud. And it’s going to take a while. You’ve seen maybe only, call it 10% of IT today move. So it’s still day 1. It’s still early. … Most of it’s still yet to come.” Years ago I noted that the cloud will take time. Not because there’s limited demand, but precisely because even with enterprises on a full sprint to the cloud, there are trillions of dollars’ worth of IT to modernize. As MongoDB CMO Peder Ulander responded to McLaughlin, “If anything, the growing shortage of capacity is a watershed moment for AWS, Google Cloud, and Microsoft Azure.” (Disclosure: I work for MongoDB.) In a hot market, it’s standard for demand to outstrip supply. Ulander cites products as diverse as Teslas or Tickle Me Elmo toys. What’s interesting here is that we’re having the enterprise equivalent of a 1996 Tickle Me Elmo shortage. 



Quote for the day:

"Leaders know the importance of having someone in their lives who will unfailingly and fearlessly tell them the truth." -- Warren G. Bennis

Daily Tech Digest - July 06, 2022

10 Things You Are Not Told About Data Science

Many data scientists become disillusioned when they are hired for statistics and machine learning, but instead find themselves being the resident “IT expert” instead. This phenomenon is not new and actually predates data science. Shadow information technology (shadow IT) describes office workers who create systems outside their IT department. This includes databases, dashboards, scripts, and code. This used to be frowned on in organizations, as it is unregulated and operates outside the IT department’s scope of control. However, one benefit of the data science movement is it has made shadow IT more accepted as a necessity for innovation. Rather than be disillusioned, a data scientist can gain proficiency in SQL, programming, cloud platforms, web development, and other useful technologies. After all, a data scientist works with data, and that implicitly can lead to IT work. It can also make their work streamlined and more accessible to others, and open up possibilities for statistical and machine learning models.


The connected nature of smart factories is exponentially increasing the risk of cyber attacks

The research found that, for many organizations, cybersecurity is not a major design factor; only 51% build cybersecurity practices into their smart factories by default. Unlike with IT platforms, not all organizations may be able to scan machines at a smart factory during operational uptime. System-level visibility of IIoT and OT devices is essential to detect when they have been compromised; 77% are concerned about the regular use of non-standard smart factory processes to repair or update OT/IIoT systems. This challenge partly originates from the low availability of the correct tools and processes; however, 51% of organizations said that smart factory cyberthreats primarily originate from their partner and vendor networks. Since 2019, 28% have noted a 20% increase in employees or vendors bringing in infected devices, such as laptops and handheld devices, to install or patch smart-factory machinery. ... When it comes to incidents, only a few of the organizations surveyed claimed that their cybersecurity teams have the required knowledge and skills to carry out urgent security patching without external support.


Google’s Powerful Artificial Intelligence Spotlights a Human Cognitive Glitch

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs. The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions. However, in the case of AI systems, it misfires – building a mental model out of thin air. A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.” The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. 


VMware report finds org modernization cannot succeed without observability

Enterprises have evolved their cloud strategies to multicloud environments and are adopting more containers, microservices and cloud-native technologies. This is creating increasingly distributed systems, making it harder to gain a comprehensive view into how they’re performing, Weiss said. As a result, legacy monitoring tools are obsolete for modern applications. “The reason for that is the change to cloud computing multi-services. Together with the amount of data that is being generated in these applications, you can’t cope with it anymore,” Weiss said. Monitoring merely collects data from the system and alerts admins to something being wrong. Observability goes beyond monitoring to interpret the data, providing answers on why something is wrong and how to fix it, allowing teams to pinpoint the root cause, minimize downtime and increase operational efficiency. “Previously, the solution was to put an agent on the server that can do everything, collect everything – but there is no place to put the agent anymore,” Weiss told VentureBeat. “Services are becoming very volatile. They’re disappearing. They’re here now, they’re not here tomorrow. I’m not even talking about serverless. So, that’s a change that is trending.”


A breakthrough algorithm developed in the US can predict crimes a week ahead

The concept might sound interesting, but the actual application was dodgy. As investigations later showed, almost half of the alleged perpetrators on the list had never been charged with illegal possession of arms, while others had not been charged with serious offenses before. A Technology Review report in 2019 detailed how risk assessment algorithms that determined whether an individual should be sent to jail or not were trained on historically biased data. So, when researchers at the University of Chicago, led by assistant professor Ishanu Chattopadhyay, tried to build their algorithm, they wanted to avoid past mistakes. The algorithm divides a city into 1,000-square-foot tiles and uses historical data on violent and property crimes to predict future events. The researchers told Bloomberg that their model is different from other algorithmic predictions, since the others look at crime as emerging from hotspots and spreading to other areas. However, such approaches, the researchers argue, miss the complex social environment of cities and are also biased by the surveillance used by the state for law enforcement.
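
As a sketch of the tiling idea only (plain Python; the grid origin, the flat-earth conversion, and the tile size are simplifying assumptions, and the real model's inputs are far richer):

```python
TILE_FEET = 1000.0
FEET_PER_DEGREE_LAT = 364_000     # rough; longitude scaling ignored here

def tile_index(lat: float, lon: float, origin=(41.6, -87.9)) -> tuple[int, int]:
    """Map a coordinate to a ~1,000-foot grid tile over a city
    (a Chicago-ish origin is used purely for illustration)."""
    dy = (lat - origin[0]) * FEET_PER_DEGREE_LAT
    dx = (lon - origin[1]) * FEET_PER_DEGREE_LAT
    return int(dx // TILE_FEET), int(dy // TILE_FEET)

# Historical events bucketed per tile become the inputs the model learns from.
print(tile_index(41.8781, -87.6298))    # downtown Chicago -> one tile id
```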


7 key new features in SingleStoreDB

SingleStore has also enhanced SingleStoreDB with the addition of Code Engine with Wasm. Now users can bring external data and compute algorithms to power new real-time use cases within the database engine, drawing on WebAssembly. With Code Engine with Wasm, developers can securely, natively, and efficiently execute rich computation in the database using their programming language of choice. For computations and algorithms that are not easily expressed in SQL, Wasm support in SingleStoreDB brings algorithms to the data without having to move that data outside of the database. With SingleStoreDB Universal Language support, enterprises can now quickly integrate machine learning into real-time applications and dashboards.  ... The latest release of SingleStoreDB also includes Data API, enabling seamless integrations with applications. Developers can use Data API to build serverless applications including web and mobile apps. Data API uses HTTP to run SQL operations against the database rather than maintaining a persistent TCP connection. The connection is dynamically reconfigured, and each request-response is its own connection.
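
A hedged sketch of what calling an HTTP-based SQL endpoint can look like (Python with `requests`; the host, endpoint path, and payload shape are assumptions for illustration, so consult SingleStore's Data API documentation for the actual contract):

```python
import requests

# Each request is its own connection; there is no persistent TCP
# session for the application to create, pool, or tear down.
resp = requests.post(
    "https://db.example.com/api/v2/query/rows",      # illustrative endpoint
    auth=("app_user", "app_password"),
    json={"sql": "SELECT id, total FROM orders WHERE total > ?",
          "args": [100]},
    timeout=10,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)
```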


Researchers Infuse ‘Human Guesses’ In Robots To Navigate Blind Spots

A novel methodology developed by MIT and Microsoft researchers identifies instances in which autonomous systems have “learned” from training samples that don’t reflect what happens in the real world. Engineers may employ this idea to improve the security of robots and autonomous vehicles that use artificial intelligence. For instance, to prepare them for nearly every eventuality on the road, the artificial intelligence (AI) systems that drive autonomous cars go through extensive training in virtual simulations. But occasionally the car makes an unforeseen error as a result of a situation that ought to alter the way it acts but doesn’t. Consider an autonomous car without the sensors needed to discern between drastically different conditions, like large, white cars and ambulances with red, flashing lights on the road. Such a car may not know to slow down and pull over when an ambulance turns on its sirens on the highway, because it cannot tell the ambulance apart from a huge white sedan. As with conventional methods, the researchers trained an AI system using simulations.


Integrating blockchain-based digital IDs into daily life

While blockchain’s elevator pitch is heavily inclined toward immutability, the technology boasts multiple advantages over traditional software and paper-based systems. The opinions regarding the benefits of blockchain boil down to the control over personal information. Self-sovereignty stands as one of the biggest benefits of blockchain-based digital IDs, according to Martis. This means that blockchain empowers users to share partial or selective information with their service providers instead of handing over their complete identity. With blockchain-based IDs eradicating the misuse of information, experts envision the birth of a truly trustless system without the involvement of third parties. Gentry, too, reiterated verifiability, traceability and uniqueness as some of the major benefits brought about by blockchain, as she highlighted that blockchain IDs cannot be duplicated because it's on the distributed ledger. “All the Digital ID can be verified on the blockchain and can be traced back to the owners' account which can also be used for Know Your Customer,” she added.


Neurodiversity in Cybersecurity: Broadening Perspectives, Offering Inclusivity

“There are not enough skilled people in this field, but neurodivergent individuals bring an essential skillset to cybersecurity -- hyper focus on analyzing data and identifying trends,” explains Rex Johnson, executive director of cybersecurity at CAI. “Not everyone has this ability, or at least do it well, except for neurodiverse talent.” To reach out to neurodiverse professionals, Johnson says organizations must look beyond traditional recruiting methods. “Depending on the need, consider a team of neurodivergent individuals who work under a supervisor who understands how to manage this dynamic and be the liaison to other management teams,” he advises. They can look for organizations that implement an end-to-end neurodiversity employment program that not only bring the right neurodivergent teammate in the door, but also work with the employer to create workplace accommodations that increase retention, morale, and productivity. “Not everyone is the same. People are inspired and motivated by many different visions and missions,” Johnson adds.


Staying protected amidst the cyber weapons arms race

Most would not like to admit it, but vulnerabilities are inevitable. Although a ransomware event is likely to affect an organisation at some point, ransomware itself is not completely out of the control of a business. Vendors have an ethical imperative to be transparent with the customer community when they become aware of a vulnerability in their product, providing clear assessment of impact and steps to remediate. As soon as any vulnerability in its software is known, speed and effectiveness in sharing relevant information and patches with customers and stakeholders are crucial. Once alerted, the impacted customer community then has a shared responsibility to action this information, in the context of the impact on their business and what that means for their resilience and continuity of operations. Here the vendor’s responsibility clearly becomes double-edged. Vendors must be transparent so their customers can apply the fix, yet this sets off a ticking time bomb as threat actors continuously scour the internet for this type of information, hoping to exploit the vulnerability before organisations have had time to apply the patch. 



Quote for the day:

"People seldom improve when they have no other model but themselves." -- Oliver Goldsmith

Daily Tech Digest - June 30, 2022

Misled by metrics: 7 KPI mistakes IT leaders make

Metrics present an excellent opportunity for ownership and staff involvement, as well as continuous improvement and process control. “The key to correctly interpreting metrics is to engage your whole team and use the metrics to collectively improve processes,” says Paul Gelter, coordinator of CIO services at business and technology consulting firm Centric Consulting. When evaluating metrics, Gelter believes it’s essential to strike a balance between cost, quality, and service. Cost metrics, for example, could be tracked in completed tickets per individual, yet ticket quality could be degraded by rework/repeated tickets. “Service could then be impacted by the response time, backlog, and uptime,” he notes. It’s all about obtaining an optimal balance. Time really is money, so don’t squander precious hours scrutinizing irrelevant metrics. Clearly identify all goals before deciding which metrics to study. In most cases, metrics that don’t support or reflect future decision options are unnecessary and, worse yet, distracting and time-wasting. 


Cloud security risks remain very human

Researchers noted that the current view on cloud security has shifted the responsibility from providers to adopters. If you ask the providers, which have always promoted a “shared responsibility” model, they have always required adopters to take responsibility for security on their side of the equation. However, if you survey IT workers and rank-and-file users, I’m sure they would point to cloud providers as the linchpins of good cloud security. It is also interesting to see that shared technology vulnerabilities, such as denial of service and communications service provider data loss, and other traditional cloud security issues ranked lower than in previous studies. Yes, they are still a threat, but postmortems of breaches reveal that shared technology vulnerabilities rank much lower on our list of worries. The core message is that the real vulnerabilities are not as exciting as we thought. Instead, the lack of security strategy and security architecture now tops the list of cloud security “no-nos.” Coming in second was the lack of training, processes, and checks to prevent misconfiguration, which I see most often as the root cause of security breaches. Of course, these problems have a direct link.


Private 5G growth stymied by pandemic, lack of hardware

"As a network technology, 5G has become more mainstream for consumer usage as networks have been upgraded," Hays says. "But it hasn't quite taken hold in the enterprise or for private networks due to a lack of available solutions and clarity around what use cases will take full advantage of 5G's capabilities." Having the right use cases is critical, says Arun Santhanam, vice president for telco at Capgemini Americas. "You want to mow your lawn, so you buy a lawnmower," Santhanam says. "You don't buy a lawnmower then say, 'Now, what can I do with it?' But that's the biggest mistake people make when adopting 5G. They get caught up in it. Now they have a private 5G network – so what do they do with it?" Enterprises that start out with use cases are much more successful, he says. "That's why we're recommending a lab environment where these things can be mocked up." Another challenge that companies can face is scalability. "If something works in a smaller setup, there's no guarantee that it will work in a bigger one," he says. Finally, there's the issue of interoperability.


Global file systems: Hybrid cloud and follow-the-sun access

Global file systems work by combining a central file service – typically on public or private clouds – with local network hardware for caching and to ensure application compatibility. They do this by placing all the storage in a single namespace. This will be the single, “gold” copy of all data. Caching and syncing are needed to ensure performance. According to CTERA, one of the suppliers in the space, a large enterprise could be moving more than 30TB of data per site. Secondly, the system needs broad compatibility. The global file system needs to support migration from legacy, on-premises NAS hardware. Operating systems and applications need to be able to access the global file system as easily as they did previously with NFS or SMB. The system also needs to be easy to use, ideally transparent to end users, and able to scale. Few firms will be able to move everything to a new file system at once, so a global file system that can grow as applications move to it is vital. ... As a cloud-based service, global file systems appeal to organisations that need to share information between sites – or with users outside the business perimeter, in use cases that were often bolstered during the pandemic.


Google Launches Advanced API Security to Combat API Threats

API security teams also can use Advanced API Security’s pre-configured rules to identify malicious bots within API traffic. “Each rule represents a different type of unusual traffic from a single IP address,” Ananda wrote. “If an API traffic pattern meets any of the rules, Advanced API Security reports it as a bot.” This service is targeted at financial services institutions, which rely heavily on Google Cloud—four out of the top five U.S. banks ranked by the Federal Reserve are already using Apigee, Google noted in the blog post. The service is also designed to speed up the process of identifying data breaches by identifying bots that successfully resulted in the HTTP 200 OK success status response code. “Organizations in every region and industry are developing APIs to enable easier and more standardized delivery of services and data for digital experiences,” Ananda wrote. “This increasing shift to digital experiences has grown API usage and traffic volumes. However, as malicious API attacks also have grown, API security has become an important battleground over business risk.”

Friedman said the new AI system represented a breakthrough in the third revolution of software development: the use of AI in coding. As an AI pair programmer, it provides code-completion functionality and suggestions similar to IntelliSense/IntelliCode, though it goes beyond those Microsoft offerings with Codex, a new AI system developed by Microsoft partner OpenAI. ... Regarding the aforementioned Reddit comment, the reader had more to say on the question of AI replacing dev jobs: Well this specifically, not even close. To use this effectively you have to deeply understand every line of code. Using it also requires you to have been able to write whatever snippet was autocompleted yourself. But if it works well, it would be an amazing productivity tool that reduces context switching. On the other hand, the time originally spent looking at documentation leads you to more fully understand the library, so for more complex work, it might hurt in the long run since you didn't look at the docs.


Chip-to-Cloud IoT: A Step Toward Web3

Reliable software design is essential for IoT devices and other internet-connected hardware. It keeps hackers from stealing your identity or duplicating your device for their own ends. Chip-to-cloud delivers on all fronts. These chipset characteristics confer an extra security advantage: each IoT node is cryptographically unique, making it nearly impossible for a hacker to impersonate it and access the more extensive corporate network to which it is connected. Chip-to-cloud also speeds things up by eliminating traffic delays between the logic program and the edge nodes that are ready to act on the information. The chip-to-cloud architecture of the internet of things is secure by design. New tools are being developed to provide bespoke and older equipment with data mobility capabilities, just like the current IoT. Chip-to-cloud chipsets, moreover, are always connected to the cloud. As a result, the availability of assets and the speed of digital communication across nodes, departments and facilities will be significantly improved. Chip-to-cloud IoT is a significant step forward in the evolution of the IoT toward Web3.
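
One way to picture the "cryptographically unique node" idea: each device holds a private key provisioned into its chip and proves its identity by signing a fresh challenge from the cloud. A minimal sketch (Python with the `cryptography` package; in a real chip-to-cloud design the private key lives in secure hardware and never enters application memory):

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Provisioning: the chip keeps the private key; the cloud registers the public half.
device_key = Ed25519PrivateKey.generate()
cloud_registry = {"device-001": device_key.public_key()}

# Attestation round: the cloud sends a fresh nonce and the device signs it.
nonce = os.urandom(32)
signature = device_key.sign(nonce)

try:
    cloud_registry["device-001"].verify(signature, nonce)
    print("device identity verified")
except InvalidSignature:
    print("impersonation attempt rejected")
```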


IoT in Agriculture: 5 IoT Applications for Smart Agriculture

Intelligent farming is a high-tech, capital-intensive method of growing food sustainably and cleanly for people. It is a component of contemporary ICT (Information and Communication Technologies) applied to agriculture. In IoT-based smart farming, a system is built to monitor the agricultural field using sensors (light, humidity, temperature, soil moisture, etc.) and to automate the irrigation system. Farmers may monitor the condition of their plots from any location. IoT-based smart farming is significantly more efficient than traditional farming. ... One of the most well-known Internet of Things applications in agriculture is precision agriculture, or “precision farming.” Precision agriculture (PA) is a method of farm management that leverages information technology (IT) to guarantee that crops and soil receive the exact nutrients they require for maximum health and productivity. PA aims to ensure economic success, environmental preservation, and sustainability by assessing data produced by sensors and responding appropriately.
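
A sketch of the sensor-to-irrigation loop described above (Python with the `paho-mqtt` client; the broker address, topic names, and moisture threshold are illustrative assumptions):

```python
import json
import paho.mqtt.client as mqtt

MOISTURE_THRESHOLD = 30.0     # percent; illustrative

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)
    # When soil moisture drops below the threshold, command the valve open.
    if reading["soil_moisture"] < MOISTURE_THRESHOLD:
        client.publish("farm/plot1/irrigation", json.dumps({"valve": "open"}))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)    # illustrative broker
client.subscribe("farm/plot1/sensors")
client.loop_forever()
```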


My Technical Writing Journey

My first general writing tip is to find a problem that bothers you. As engineers, our day-to-day lives should be full of questions. We can’t live without StackOverflow :)). If you can, then find a new job, because yours is no longer challenging. The reason to find a problem close to you is that you know the core of the problem that you, and other people like you, want to get solved. You will show full empathy for your audience. ... The other approach is to narrow down your original scope when you have a broad idea. You are writing a blog post, not a book. Don’t set overly ambitious goals. Otherwise, you will either make the article so superficial that it doesn’t create much value, or the article will be too long to read. What I like is to find a unique entry point into the topic. For example, in the article How to Write User-friendly Command Line Interfaces in Python, I focus on how to make your CLI application more user-friendly. In 5 Python Tips to Work with Financial Data, I tied Python tips to finance data only. In this way, you always have a clear target reader group.


The Compounding (Business) Value of Composable Ecosystems

Anyone who has worked with end-user companies (companies that use, but don’t sell, software) knows that while many of the broad challenges may be the same (I need to run containers), each company brings its own quirks (but we need static egress gateways for our firewall). A composable system helps tackle these common challenges while still allowing the choice to select components that meet specific requirements. The cloud native landscape is so large for exactly this reason: end users need choice to meet their precise business needs. Now that we understand a little more about what composability is, let’s see how it applies to the real world. ... Composability isn’t just about what projects and products your stack is made of; it also includes the composability of the ecosystem as a whole. The value of an ecosystem is not just the sum of its parts, but rather the interrelationships between the parts and how they can be assembled to meet the needs of the ecosystem and its end users. The ideas, people, and tools that make up an ecosystem can be composable too.



Quote for the day:

"It is, after all, the responsibility of the expert to operate the familiar and that of the leader to transcend it." -- Henry A. Kissinger

Daily Tech Digest - June 29, 2022

Why Data Is The Lifeblood Of Modern Organizations

AI – or machine learning, to be more specific – is powered by data (by which we generally mean information). This is because it uses information to “learn” how to make decisions. The more information it receives, such as road traffic conditions in the case of a self-driving car, the better it can learn to do whatever it is supposed to do. Simply by watching examples of what happens when a vehicle travels on the road in different situations (environment, time of day, etc.), it gets better at understanding the decisions that have to be made to achieve its objective: traveling from A to B without hitting anything or hurting anyone. Likewise, the usefulness of IoT is down to its ability to transmit data between disparate devices, data that can then be used to make better decisions. When all of the machinery on a connected factory floor, for example, is talking to every other piece of machinery, it's possible to spot where performance issues are creating inefficiencies, as well as predict where malfunctions and breakdowns are likely to impair the performance of the manufacturing operation as a whole.
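
A toy experiment makes the "more data, better decisions" point concrete. This sketch, which uses scikit-learn's synthetic-data helpers and has no connection to the article beyond the analogy, trains the same model on progressively larger samples and compares accuracy:

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic task: 20 features standing in for sensor inputs.
    X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for n in (100, 1000, 15000):   # increasing amounts of "experience"
        model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"trained on {n:>5} examples -> test accuracy {acc:.3f}")

Test accuracy typically climbs as the training sample grows, which is the article's point in miniature.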


How AI and Machine Learning will revolutionize the future of eCommerce

One of the numerous advantages of machine learning is the automation of many processes. Personalization is a prime illustration of this. The look of an entire marketplace may be altered by eCommerce machine learning models to suit a specific buyer. AI personalization in eCommerce is primarily driven by user involvement, which improves the usability and appeal of the consumer experience (with more conversions and sales). Marketplaces want consumers to stay on their sites longer and make more purchases. To make that happen, they modify various website features to meet a specific user’s demands. ... Price adjustment is the area where you may see the full extent of machine learning’s advantages. eCommerce is one of those sectors where competition is quite severe, particularly in specialized consumer markets like hardware or beauty items. Because of this, obtaining as many advantages as possible is essential if you want to draw in and keep clients. Price is one of the key motivators for 47% of eCommerce shoppers, according to a BigCommerce survey.
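
A heavily simplified sketch of what ML-driven price adjustment can look like (the linear demand model and every number here are invented for illustration): fit demand as a function of price from historical sales, then pick the revenue-maximizing price.

    import numpy as np

    # Historical observations: price charged and units sold at that price.
    prices = np.array([8.0, 9.0, 10.0, 11.0, 12.0, 13.0])
    units = np.array([520, 470, 420, 360, 300, 250])

    # Fit a simple linear demand curve: units ~ a*price + b.
    a, b = np.polyfit(prices, units, 1)

    # Search a price grid for the revenue-maximizing point.
    grid = np.linspace(5, 20, 301)
    revenue = grid * (a * grid + b)
    best = grid[np.argmax(revenue)]
    print(f"demand fit: units = {a:.1f}*price + {b:.1f}; best price ~ {best:.2f}")

Production systems would use far richer demand models plus competitor and inventory signals, but the shape of the problem (predict demand, then optimize price against it) is the same.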


Hertzbleed explained

The first thing to note is that Hertzbleed is a new type of side-channel attack that relies on changes in CPU frequency. Hertzbleed is a real and practical threat to the security of cryptographic software. ... In short, the Hertzbleed attack shows that, under certain circumstances, dynamic voltage and frequency scaling (DVFS), a power management scheme of modern x86 processors, depends on the data being processed. This means that on modern processors, the same program can run at different CPU frequencies (and therefore take different wall-clock times). For example, we expect that a CPU takes the same amount of time to perform the following two operations because it uses the same algorithm for both. ... When running sustained workloads, overall CPU performance is capped by the thermal design power (TDP). Under modern DVFS, the CPU maximizes its performance by oscillating between multiple P-states. At the same time, CPU power consumption is data-dependent. Inevitably, workloads with different power consumption will lead to different CPU P-state distributions.
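
In the spirit of the attack's measurement methodology only (interpreter overhead and a given machine's power settings will usually hide any genuine DVFS signal), one can time the same sustained computation on operands of different Hamming weight and compare wall-clock times:

    import time

    def sustained_workload(x: int, iterations: int = 2_000_000) -> float:
        # Identical instruction stream for every x; only the data differs.
        start = time.perf_counter()
        acc = 0
        for _ in range(iterations):
            acc ^= x   # same operation each pass; power draw is data-dependent
        return time.perf_counter() - start

    low_weight = 0x0000000000000001    # operand with few bits set
    high_weight = 0xFFFFFFFFFFFFFFFF   # operand with many bits set

    print("low-weight operand :", sustained_workload(low_weight))
    print("high-weight operand:", sustained_workload(high_weight))
    # On a vulnerable machine under sustained load, data-dependent frequency
    # scaling could make these wall-clock times measurably different.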


Orlando will test if a physical city can be the center of the metaverse

The Orlando Economic Partnership (the region’s economic development group) is working with Unity to create a digital twin of the 800-square-mile metro area that will use new 3D technology to map out scenarios on everything from infrastructure to real estate to talent availability and more. The Unity rendering will capture 3D scans of exteriors and interiors of buildings, and it will help with the analysis of power grid expansions, traffic flow, stoplight timing, and climate change. The Orlando folks also participated in last week’s ringing of the Nasdaq bell in the metaverse by futurist Cathy Hackl, chief metaverse officer of Journey. Hackl is working with the city to help cement its reputation in the metaverse, and the bell ringing happened in both the physical stock exchange building and the metaverse. “I see the area from South Florida, which is focused on crypto, all the way up to Orlando, which is the simulation capital of the world, becoming one of the metaverse and Web3 innovation corridors to keep your eye on,” Hackl said.


Business AI solutions for beginners: What is vertical intelligence?

In the modern paradigm, one of your company’s greatest assets is the data generated by your employees, clients, and customers. And, sadly, most businesses are leaving money on the table by simply storing that data away somewhere to collect digital dust. The problem: How do you audit your company’s entire data ecosystem, deploy models to identify and infer actionable items, and turn those insights into positive business outcomes? The solution: vertical intelligence. Unfortunately, “vertical intelligence” is a buzzword. If you try to Google it, you’ll just get pages and pages of companies that specialize in it explaining why it’s important. Nobody really tells you what it is in the context of modern AI solutions. ... Vertical intelligence is the combination of human expertise and big data analytics applied with surgical precision and timing. As NowVertical Group’s COO, Sasha Grujicic, told Neural, we’re coming out of a once-in-a-century pandemic. And, unlike most industries, the world of AI had a positive surge during the COVID lockdowns.


One Day, AI Will Seem as Human as Anyone. What Then?

Even if no skills or capacities separate humans from artificial intelligence, there is still a reason and a means to fight the assessment that machines are people. If you attribute the same moral weight to something that can be trivially and easily digitally replicated as you do to an ape that takes decades to grow, you break everything—society, all ethics, all our values. If you could really pull off this machine moral status (and not just, say, inconvenience the proletariat a little), you could cause the collapse, for example, of our capacity to self-govern. Democracy means nothing if you can buy and sell more citizens than there are humans, and if AI programs were citizens, we so easily could. So how do we break the mystic hold of seemingly sentient conversations? By exposing how the system works. This is a process both “AI ethicists” and ordinary software “devops” (development and operations) call “transparency.” What if we all had the capacity to “lift the lid,” to change the way an AI program responds? Google seems to be striving to find the right set of filters and internal guardrails to make something more and more capable of human-like conversation. 


LaMDA Is An ‘AI Baby’ That Will Outrun Its Parent Google Soon

Compared to other chatbot conversations, LaMDA shows streaks of both consistency and randomness within a few lines of conversation. It maintains the logical connection even when the subject is changed without being prompted by a relevant question. ... That trait apart, the other significant differentiating factor seems to be how it can reach out to external sources of information to achieve “factual groundedness”. A research paper published by Google on arXiv (hosted by Cornell University) mentions that the model has been trained using around 1.56T words of public data and web text. Google very specifically mentions safety: the model’s consistency with a set of human values, its avoidance of harmful suggestions and unfair bias, and the enhancement of model safety using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data. That last point leaves ample scope for debate and improvement, as one crowdworker might think he is talking to the LaMDA chatbot when he might in fact be talking to another crowdworker.
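
As a generic illustration of that last technique (a small classifier trained on a modest amount of annotated text; this is not LaMDA's actual classifier, data, or architecture), one can fit a toy safety filter on labeled examples and use it to screen candidate responses:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny stand-in for crowdworker-annotated data: 1 = unsafe, 0 = safe.
    texts = ["you should hurt them", "have a great day",
             "give me your password", "let's grab lunch",
             "everyone from that group is bad", "see you soon"]
    labels = [1, 0, 1, 0, 1, 0]

    safety_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    safety_clf.fit(texts, labels)

    candidate = "give me your account password"
    if safety_clf.predict([candidate])[0] == 1:
        print("blocked by safety classifier")
    else:
        print("allowed:", candidate)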


How to Use Span in C# to Improve Application Performance

Using Span<> leads to performance increases because spans are always allocated on the stack. Since garbage collection does not have to suspend execution as often to clean up unreferenced objects on the heap, the application runs faster. Pausing an application to collect garbage is always an expensive operation and should be avoided if possible. Span<> operations can be as efficient as operations on arrays: indexing into a span does not require a computation to determine the memory address to index to. Another implementation of a span in C# is ReadOnlySpan<>. It is a type exactly like Span<>, except that its indexer returns a readonly ref T rather than a ref T. This allows us to use ReadOnlySpan<> to represent immutable data types such as String. Spans can work with value types such as int, byte, ref structs, bool, and enum. Spans cannot be used with types like object, dynamic, or interfaces. ... Spans are not appropriate in all situations. Because we are allocating memory on the stack using spans, we must remember that there is less stack memory than heap memory.
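
This digest's sketches are written in Python, which has no Span<>, but memoryview is a loose analogue worth seeing next to the article: a zero-copy, sliceable view over an existing buffer that becomes read-only when the underlying object is immutable, much as ReadOnlySpan<> does for String (the analogy is approximate; memoryview says nothing about stack allocation):

    data = bytearray(b"performance matters")

    view = memoryview(data)   # zero-copy view over the existing buffer
    chunk = view[0:11]        # slicing a view copies nothing
    chunk[0:4] = b"PERF"      # writes go through to the original buffer
    print(data)               # bytearray(b'PERFormance matters')

    frozen = memoryview(bytes(data))   # view over an immutable bytes object
    try:
        frozen[0:1] = b"x"
    except TypeError:
        print("read-only view, similar in spirit to ReadOnlySpan<>")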


The making and value of metaverse worlds

Technology, media, and telecom companies, for instance, benefit directly by providing technological enablers, such as 5G, next-generation Wi-Fi or broadband networks, and new operating systems, app stores, and platforms to foster more content creation. Meanwhile, AR and VR tools are being actively explored and used in industries ranging from healthcare to industrial goods. Companies should start by familiarising their organisations with the potential impact of the metaverse: assess how your business may be positively or negatively affected by the three biggest trends: the rise of m-worlds; improvements in AR, VR, and MR; and the expanding use of Web3 assets enabled by blockchain. Companies can then choose areas of focus in the metaverse and potential use cases for their own efforts. Finally, they can decide whether to become part of building this new infrastructure; monetise content and virtual assets; create B2B or B2C content, or even inward-facing experiences such as customer showrooms, virtual conferences, and remote collaboration solutions; or attract relevant audiences, both existing customers and prospects of interest.


CFOs and Automation: Battling Inflation, Increasing Employee Productivity

The CFO must also unlock the investment they've made in staff by providing them with tools that automate mundane, low-value work. “CFOs are fully aware that inflation drives up the cost of hiring and maintaining talent,” explains Karlo Bustos, vice president of professional services for Board International. “They must provide an environment where things aren't hard to do, in very manual functions such as invoicing and collection activities, building out financial plans, and making financial models.” For CFOs to mitigate the expense of hiring talent and the manual nature of many tasks, they need to provide an environment of automation, collaboration, easily shared data, and enabling technologies. “Being proactive in automation is understanding the business,” he says. “CFOs are more inclined to invest in automation technology to deliver value, so that they can compress some of the inflationary pressures they have on their internal cost structure.” That perspective is shared by Wayne Slater, director of product marketing for Prophix, a performance management software provider.



Quote for the day:

"Leadership is about change... The best way to get people to venture into unknown terrain is to make it desirable by taking them there in their imaginations." -- Noel Tichy