Daily Tech Digest - June 30, 2022

Misled by metrics: 7 KPI mistakes IT leaders make

Metrics present an excellent opportunity for ownership and staff involvement, as well as continuous improvement and process control. “The key to correctly interpreting metrics is to engage your whole team and use the metrics to collectively improve processes,” says Paul Gelter, coordinator of CIO services at business and technology consulting firm Centric Consulting. When evaluating metrics, Gelter believes it’s essential to strike a balance between cost, quality, and service. Cost, for example, could be tracked as completed tickets per individual, yet ticket quality could be degraded by rework and repeated tickets. “Service could then be impacted by the response time, backlog, and uptime,” he notes. It’s all about striking an optimal balance. Time really is money, so don’t squander precious hours scrutinizing irrelevant metrics. Clearly identify all goals before deciding which metrics to study. In most cases, metrics that don’t support or reflect future decision options are unnecessary and, worse yet, distracting and time-wasting.


Cloud security risks remain very human

Researchers noted that the current view on cloud security has shifted responsibility from providers to adopters. Providers that have always promoted a “shared responsibility” model have always required adopters to take responsibility for security on their side of the equation. However, if you survey IT workers and rank-and-file users, I’m sure they would point to cloud providers as the linchpins of good cloud security. It is also interesting to see that traditional cloud security issues such as shared technology vulnerabilities, denial of service, and data loss at communications service providers ranked lower than in previous studies. Yes, they are still a threat, but postmortems of breaches reveal that shared technology vulnerabilities rank much lower on our list of worries. The core message is that the real vulnerabilities are not as exciting as we thought. Instead, the lack of security strategy and security architecture now tops the list of cloud security “no-nos.” Coming in second was the lack of training, processes, and checks to prevent misconfiguration, which I see most often as the root cause of security breaches. Of course, these problems are directly linked.


Private 5G growth stymied by pandemic, lack of hardware

"As a network technology, 5G has become more mainstream for consumer usage as networks have been upgraded," Hays says. "But it hasn't quite taken hold in the enterprise or for private networks due to a lack of available solutions and clarity around what use cases will take full advantage of 5G's capabilities." Having the right use cases is critical, says Arun Santhanam, vice president for telco at Capgemini Americas. "You want to mow your lawn, so you buy a lawnmower," Santhanam says. "You don't buy a lawnmower then say, 'Now, what can I do with it?' But that's the biggest mistake people make when adopting 5G. They get caught up in it. Now they have a private 5G network – so what do they do with it?" Enterprises that start out with use cases are much more successful, he says. "That's why we're recommending a lab environment where these things can be mocked up." Another challenge that companies can face is scalability. "If something works in a smaller setup, there's no guarantee that it will work in a bigger one," he says. Finally, there's the issue of interoperability.


Global file systems: Hybrid cloud and follow-the-sun access

Global file systems work by combining a central file service – typically on public or private clouds – with local network hardware for caching and to ensure application compatibility. They do this by placing all the storage in a single namespace. This will be the single, “gold” copy of all data. Caching and synching are needed to ensure performance. According to CTERA, one of the suppliers in the space, a large enterprise could be moving more than 30TB of data per site. Second, the system needs broad compatibility. The global file system needs to support migration from legacy, on-premises NAS hardware. Operating systems and applications need to be able to access the global file system as easily as they did previously with NFS or SMB. The system also needs to be easy to use, ideally transparent to end users, and able to scale. Few firms will be able to move everything to a new file system at once, so a global file system that can grow as applications move to it is vital. ... As a cloud-based service, global file systems appeal to organisations that need to share information between sites – or with users outside the business perimeter, in use cases that were often bolstered during the pandemic.
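To make the caching idea concrete, here is a toy Python sketch of the pattern described above: a single “gold” copy with per-site, write-through caches. It is illustrative only, not how CTERA or any other vendor actually implements it, and it omits the locking, delta sync, and cache invalidation a real product needs.

```python
class GoldCopy:
    """Central file service holding the single authoritative namespace."""
    def __init__(self):
        self.files = {}  # path -> bytes

class EdgeCache:
    """Per-site cache that keeps hot files close to local applications."""
    def __init__(self, gold: GoldCopy):
        self.gold = gold
        self.local = {}  # path -> bytes

    def read(self, path: str) -> bytes:
        if path not in self.local:          # cache miss: fetch from the center
            self.local[path] = self.gold.files[path]
        return self.local[path]

    def write(self, path: str, data: bytes) -> None:
        self.local[path] = data             # serve local users immediately
        self.gold.files[path] = data        # write through to the gold copy

gold = GoldCopy()
site_london, site_tokyo = EdgeCache(gold), EdgeCache(gold)
site_london.write("/projects/plan.txt", b"v1")
assert site_tokyo.read("/projects/plan.txt") == b"v1"  # follow-the-sun access
```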


Google Launches Advanced API Security to Combat API Threats

API security teams also can use Advanced API Security’s pre-configured rules to identify malicious bots within API traffic. “Each rule represents a different type of unusual traffic from a single IP address,” Ananda wrote. “If an API traffic pattern meets any of the rules, Advanced API Security reports it as a bot.” This service is targeted at financial services institutions, which rely heavily on Google Cloud—four out of the top five U.S. banks ranked by the Federal Reserve are already using Apigee, Google noted in the blog post. The service is also designed to speed up the identification of data breaches by spotting bots whose requests succeeded with the HTTP 200 OK status response code. “Organizations in every region and industry are developing APIs to enable easier and more standardized delivery of services and data for digital experiences,” Ananda wrote. “This increasing shift to digital experiences has grown API usage and traffic volumes. However, as malicious API attacks also have grown, API security has become an important battleground over business risk.”
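As a rough illustration of the per-IP, rule-based bot detection described above, here is a Python sketch. The rule names, thresholds, and data shapes are invented for illustration; this is not Apigee's actual rule set or API.

```python
from collections import Counter

def flag_bots(requests, rules):
    """requests: iterable of (ip, http_status) pairs; rules: name -> predicate."""
    by_ip = {}
    for ip, status in requests:
        by_ip.setdefault(ip, []).append(status)
    flagged = {}
    for ip, statuses in by_ip.items():
        hits = [name for name, rule in rules.items() if rule(statuses)]
        if hits:              # any single matching rule marks the IP as a bot
            flagged[ip] = hits
    return flagged

rules = {
    # a scraper whose every request succeeds (all HTTP 200 OK)
    "all_200_flood": lambda s: len(s) > 100 and set(s) == {200},
    # credential stuffing: a pile of 401/403 failures from one address
    "login_guessing": lambda s: Counter(s)[401] + Counter(s)[403] > 50,
}

traffic = [("203.0.113.9", 200)] * 150
print(flag_bots(traffic, rules))  # {'203.0.113.9': ['all_200_flood']}
```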

Friedman said the new AI system represented a breakthrough in the third revolution of software development: the use of AI in coding. As an AI pair programmer, it provides code-completion functionality and suggestions similar to IntelliSense/IntelliCode, though it goes beyond those Microsoft offerings with Codex, a new AI system developed by Microsoft partner OpenAI. ... Regarding the aforementioned Reddit comment, the reader had more to say on the question of AI replacing dev jobs: Well this specifically, not even close. To use this effectively you have to deeply understand every line of code. Using it also requires you to have been able to write whatever snippet was autocompleted yourself. But if it works well, it would be an amazing productivity tool that reduces context switching. On the other hand, the time originally spent looking at documentation helped you more fully understand the library, so for more complex work, it might hurt in the long run since you didn't look at the docs.


Chip-to-Cloud IoT: A Step Toward Web3

Reliable software design is essential for IoT devices and other internet-connected devices. It keeps hackers from stealing your identity or duplicating your device for their ulterior motives. Chip-to-cloud delivers on all fronts. These chipset characteristics confer an extra security advantage. Each IoT node is cryptographically unique, making it nearly impossible for a hacker to impersonate it and access the more extensive corporate network to which it is connected. Chip-to-cloud speeds things up by eliminating traffic delays between the logic program and the edge nodes that are ready to act on the information. The chip-to-cloud architecture of the internet of things is secure by design. New tools are being developed to provide bespoke and older equipment with data mobility capabilities, just like the current IoT. Nevertheless, chip-to-cloud chipsets are always connected to the cloud. As a result, the availability of assets and the speed of digital communication across nodes, departments and facilities will be significantly improved. Chip-to-cloud IoT is a significant step forward in the evolution of the IoT toward Web3.


IoT in Agriculture: 5 IoT Applications for Smart Agriculture

Intelligent farming is a high-tech, capital-intensive method of growing food sustainably and cleanly. It is a component of contemporary ICT (Information and Communication Technologies) applied to agriculture. In IoT-based smart farming, a system is built to monitor the agricultural field using sensors (light, humidity, temperature, soil moisture, etc.) and to automate the irrigation system. Farmers may monitor the condition of their fields from any location. IoT-based smart farming is significantly more efficient than traditional farming. ... One of the most well-known Internet of Things applications in agriculture is precision agriculture, or “precision farming.” Precision agriculture (PA) is a method of farm management that leverages information technology (IT) to guarantee that crops and soil receive the exact nutrients they require for maximum health and productivity. PA aims to ensure economic success, environmental preservation, and sustainability by assessing data produced by sensors and responding appropriately.
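A minimal Python sketch of the sensor-driven irrigation loop described above. The read_soil_moisture() and set_pump() functions are hypothetical stand-ins for whatever sensor and actuator library a real deployment would use.

```python
import time

MOISTURE_THRESHOLD = 30.0  # percent; the right value is crop- and soil-specific

def irrigation_loop(read_soil_moisture, set_pump, poll_seconds=60):
    """Poll the soil moisture sensor and switch the pump around a threshold."""
    while True:
        moisture = read_soil_moisture()           # e.g. from a capacitive probe
        set_pump(on=(moisture < MOISTURE_THRESHOLD))
        time.sleep(poll_seconds)                  # sensors are sampled, not streamed
```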


My Technical Writing Journey

My first general writing tip is to find a problem that bothers you. As engineers, our day-to-day life should be full of questions. We can’t live without StackOverflow :)). If you can, then find a new job, because yours isn’t challenging any more. The reason to find a problem close to you is that you know the core of the problem that you, and other people like you, want solved. You will be able to show full empathy for your audience. ... The other approach is to narrow down your original scope when you have a broad idea. You are writing a blog post, not a book. Don’t set overly ambitious goals. Otherwise, you will either make the article superficial, which doesn’t create much value, or make it too long to read. What I like is to find a unique entry point into the topic. For example, in the article How to Write User-friendly Command Line Interfaces in Python, I focus on how to make your CLI application more user-friendly. In 5 Python Tips to Work with Financial Data, I tied the Python tips to finance data only. In this way, you always have a clear target reader group.


The Compounding (Business) Value of Composable Ecosystems

Anyone who has worked with end-user companies (companies that use, but don’t sell, software) knows that while many of the broad challenges may be the same (I need to run containers), each company brings its own quirks (but we need static egress gateways for our firewall). A composable system helps tackle these common challenges while still allowing the choice to select components that meet specific requirements. The cloud native landscape is so large for exactly this reason: end users need choice to meet their precise business needs. Now that we understand a little more about what composability is, let’s see how it applies to the real world. ... Composability isn’t just about what projects and products your stack is made of; it also includes the composability of the ecosystem as a whole. The value of an ecosystem is not just the sum of its parts, but rather the interrelationships between the parts and how they can be assembled to meet the needs of the ecosystem and end users. The ideas, people, and tools that make up an ecosystem can be composable too.



Quote for the day:

"It is, after all, the responsibility of the expert to operate the familiar and that of the leader to transcend it." -- Henry A. Kissinger

Daily Tech Digest - June 29, 2022

Why Data Is The Lifeblood Of Modern Organizations

AI – or machine learning, to be more specific – is powered by data (by which we generally mean information). This is because it uses information to “learn” how to make decisions. The more information it receives – road traffic conditions, for example, in the case of a self-driving car – the better it can learn to do whatever it is supposed to do. Simply by watching examples of what happens when a vehicle travels on the road in different situations (environment, time of day, etc.), it gets better at understanding the decisions that have to be made to achieve its objective – traveling from A to B without hitting anything or hurting anyone. Likewise, the usefulness of IoT is down to its ability to transmit data between disparate devices, which can then be used to make better decisions. When all of the machinery on a connected factory floor, for example, is talking to every other piece of machinery, it's possible to spot where performance issues are creating inefficiencies, as well as predict where malfunctions and breakdowns are likely to impair the performance of the manufacturing operation as a whole.


How AI and Machine Learning will revolutionize the future of eCommerce

One of the numerous advantages of machine learning is the automation of many processes. Personalization is a prime illustration of this. The entire marketplace’s look may be altered using machine learning models for eCommerce to suit a specific buyer. AI personalization in eCommerce is primarily driven by user involvement, which improves the usability and appeal of the consumer experience (with more conversions and sales). Marketplaces want consumers to stay on their sites longer and make more purchases. To make that happen, they modify various website features to meet the specific user’s demands. ... The area of price adjustment is where you may see the full extent of machine learning’s advantages. eCommerce is one of those sectors where competition is quite severe, particularly in specialized consumer markets like hardware or beauty items. Because of this, gaining every advantage you can is essential if you want to draw in and keep clients. Price is one of the key motivators for 47% of eCommerce shoppers, according to a BigCommerce survey.


Hertzbleed explained

The first thing to note is that Hertzbleed is a new type of side-channel attack that relies on changes in CPU frequency. Hertzbleed is a real, and practical, threat to the security of cryptographic software. ... In short, the Hertzbleed attack shows that, under certain circumstances, dynamic voltage and frequency scaling (DVFS), a power management scheme of modern x86 processors, depends on the data being processed. This means that on modern processors, the same program can run at different CPU frequencies (and therefore take different wall-clock times). For example, we expect that a CPU takes the same amount of time to perform the following two operations because it uses the same algorithm for both. ... When running sustained workloads, overall CPU performance is capped by TDP. Under modern DVFS, the CPU maximizes its performance by oscillating between multiple P-states. At the same time, the CPU's power consumption is data-dependent. Inevitably, workloads with different power consumption will lead to different CPU P-state distributions.
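The concrete pair of operations is elided from this excerpt, but the argument only needs two executions of the same instruction sequence on different operands. A stand-in sketch (the actual values in the source article may differ):

```python
# Two additions that run the same instruction on different data.
x = 2022 + 23823   # second operand has many set bits
y = 2022 + 0       # second operand has no set bits

# Classical constant-time reasoning says both take the same number of cycles.
# Hertzbleed's observation: the power drawn depends on the data, and under
# DVFS that power difference can shift the CPU's P-state/frequency, so the
# wall-clock time of "constant-time" code can leak the data being processed.
```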


Orlando will test if a physical city can be the center of the metaverse

The Orlando Economic Partnership (the region’s economic development group) is working with Unity to create a digital twin of the 800-square-mile metro area that will use new 3D technology to map out scenarios on everything from infrastructure to real estate to talent availability and more. The Unity rendering will capture 3D scans of exteriors and interiors of buildings, and it will help with the analysis of power grid expansions, traffic flow, stoplight timing, and climate change. The Orlando folks also participated in last week’s ringing of the Nasdaq bell in the metaverse by futurist Cathy Hackl, chief metaverse officer of Journey. Hackl is working with the city to help cement its reputation in the metaverse, and the bell ringing happened in both the physical stock exchange building and the metaverse. “I see the area from South Florida, which is focused on crypto, all the way up to Orlando, which is the simulation capital of the world, becoming one of the metaverse and Web3 innovation corridors to keep your eye on,” Hackl said.


Business AI solutions for beginners: What is vertical intelligence?

In the modern paradigm, one of your company’s greatest assets is the data generated by your employees, clients, and customers. And, sadly, most businesses are leaving money on the table by simply storing that data away somewhere to collect digital dust. The problem: How do you audit your company’s entire data ecosystem, deploy models to identify and infer actionable items, and turn those insights into positive business outcomes? The solution: vertical intelligence. Unfortunately, “vertical intelligence” is a buzzword. If you try to Google it, you’ll just get pages and pages of companies that specialize in it explaining why it’s important. Nobody really tells you what it is in the context of modern AI solutions. ... Vertical intelligence is the combination of human expertise and big data analytics applied with surgical precision and timing. As NowVertical Group’s COO, Sasha Grujicic, told Neural, we’re coming out of a once-in-a-century pandemic. And, unlike most industries, the world of AI had a positive surge during the COVID lockdowns.


One Day, AI Will Seem as Human as Anyone. What Then?

Even if no skills or capacities separate humans from artificial intelligence, there is still a reason and a means to fight the assessment that machines are people. If you attribute the same moral weight to something that can be trivially and easily digitally replicated as you do to an ape that takes decades to grow, you break everything—society, all ethics, all our values. If you could really pull off this machine moral status (and not just, say, inconvenience the proletariat a little), you could cause the collapse, for example, of our capacity to self-govern. Democracy means nothing if you can buy and sell more citizens than there are humans, and if AI programs were citizens, we so easily could. So how do we break the mystic hold of seemingly sentient conversations? By exposing how the system works. This is a process both “AI ethicists” and ordinary software “devops” (development and operations) call “transparency.” What if we all had the capacity to “lift the lid,” to change the way an AI program responds? Google seems to be striving to find the right set of filters and internal guardrails to make something more and more capable of human-like conversation. 


LaMDA Is An ‘AI Baby’ That Will Outrun Its Parent Google Soon

Compared to other chatbot conversations, LaMDA shows streaks of both consistency and randomness within a few lines of conversation. It maintains the logical connection even when the subject is changed without being prompted by a relevant question. ... That trait apart, the one other significant differentiating factor seems to be how it can reach out to external sources of information to achieve “factual groundedness”. A research paper published by Google with Cornell University mentions that the model has been trained using around 1.56T words of public data and web text. Google very specifically mentions safety, in terms of the model’s consistency with a set of human values, avoiding harmful suggestions and unfair bias, and enhancing the model’s safety using a LaMDA classifier fine-tuned with a small amount of crowd-worker-annotated data. That last point leaves ample scope for debate and improvement, as a crowd worker might think they are talking to the LaMDA chatbot when they might be talking to another crowd worker.


How to Use Span in C# to Improve Application Performance

Using Span<> leads to performance increases because spans are always allocated on the stack. Since garbage collection does not have to suspend execution as often to clean up unreferenced objects on the heap, the application runs faster. Pausing an application to collect garbage is always an expensive operation and should be avoided if possible. Span<> operations can be as efficient as operations on arrays. Indexing into a span does not require computation to determine the memory address to index to. Another implementation of a span in C# is ReadOnlySpan<>. It is a type exactly like Span<> other than that its indexer returns a readonly ref T, not a ref T. This allows us to use ReadOnlySpan<> to represent immutable data types such as String. Spans can use value types such as int, byte, ref structs, bool, and enum. Spans cannot use types like object, dynamic, or interfaces. ... Spans are not appropriate in all situations. Because we are allocating memory on the stack using spans, we must remember that there is less stack memory than heap memory.
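Python has no counterpart to C#'s stack-only Span<>, but as a loose analogy (a deliberate substitution, not the C# API), memoryview shows the same zero-copy "view over an existing buffer" idea, including a read-only variant in the spirit of ReadOnlySpan<>.

```python
buf = bytearray(b"hello world")
view = memoryview(buf)

window = view[6:11]        # a view of 5 bytes, not a copy
window[:] = b"earth"       # writes go through to the underlying buffer
assert buf == bytearray(b"hello earth")

ro = view.toreadonly()     # read-only view, akin to ReadOnlySpan<>
try:
    ro[0] = 0
except TypeError:
    pass                   # mutation through a read-only view is rejected
```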


The making and value of metaverse worlds

Technology, media, and telecom companies, for instance, benefit directly by providing technological enablers, such as 5G, next-generation Wi-Fi or broadband networks, and new operating systems, app stores, and platforms to foster more content creation. Meanwhile, AR and VR tools are being actively explored and used in industries ranging from healthcare to industrial goods. Companies should start by familiarising their organisations with the potential impact of the metaverse. To start with, it’s important to do an assessment of how your business may be positively or negatively affected by the three biggest trends: the rise of m-worlds; improvements in AR, VR, and MR; and the expanding use of Web3 assets enabled by blockchain. Companies can then choose areas of focus in the metaverse and potential use cases for their own efforts. Finally, they can decide whether to become part of building this new infrastructure; monetise content and virtual assets; create B2B or B2C content, or even inward-facing experiences such as customer showrooms, virtual conferences, and remote collaboration solutions; or attract relevant audiences, both existing customers and prospects of interest.


CFOs and Automation: Battling Inflation, Increasing Employee Productivity

The CFO must also unlock the investment they've made in staff by providing them with tools that automate mundane, low-value work. “CFOs are fully aware that inflation drives up the cost of hiring and maintaining talent,” explains Karlo Bustos, vice president of professional services for Board International. “They must provide an environment where things aren't hard to do, in a very manual-based function such as invoicing collection activities, building out financial plans, and making financial models.” For CFOs to mitigate the expense of hiring talent and the manual nature of many tasks, they need to provide an environment of automation, collaboration, easily shared data, and enabling technologies. “Being proactive in automation is understanding the business,” he says. “CFOs are more inclined to invest in automation technology to deliver value, so that they can compress some of the inflationary pressures they have on their internal cost structure.” That perspective was shared by Wayne Slater, director of product marketing for Prophix, a performance management software provider. 



Quote for the day:

"Leadership is about change... The best way to get people to venture into unknown terrain is to make it desirable by taking them there in their imaginations." -- Noel Tichy

Daily Tech Digest - June 28, 2022

What Gartner’s top cybersecurity predictions for 2022-23 reveal

Implied in the predictions is advice to focus not just on ransomware or any other currently trending type of cyberattack, but to prioritize cybersecurity investments as core to managing risks and see them as investments in the business. By 2025, 60% of organizations will use cybersecurity risk as a primary determinant in conducting third-party transactions and business engagements, according to Gartner‘s predictions. Doubling down with greater resilience across every threat surface is key. For example, while Gartner mentions zero-trust network access (ZTNA) in just one of the eight predictions, the core concepts of ZTNA and its benefits are reflected in most of the predictions. The predictions also note that investing in preventative controls is not enough, and that there needs to be a much higher priority placed on resilience. This is because threat surfaces grow faster than many organizations can gain visibility into them and protect them. By 2025, it is expected that 80% of enterprises will adopt a strategy to unify web, cloud services and private application access from a single vendor’s secured service edge (SSE) platform.


Don't Get Fired: How to Sell Max Cybersecurity to the C-Suite

"So some of the strategies that I use when I'm working with the C-level teams, the boards of directors, is I don't just give them a summarization or my opinion," continued O'Neill Sr. "I bring in events from insurance -- our insurance broker or our auditors -- and I say, 'Hey, can you give me a few examples of other customers where their cybersecurity insurance didn't get renewed because of some event? Or can you give me an example of a audit that failed because proper levels of protection weren't put in place?' "And I articulate those things to the CEOs and the boards of directors. Not in long-worded descriptions, but basically like, 'Hey, you know, if you look at this year, and our actual insurance broker says that they have processed claims for a billion dollars this year because of security events where malware has been involved.' And then I show them data where I say, 'Okay, of the 100 events ... about 15 percent of those companies never survived. They did not return back to business.' Okay. 


How tech companies are responding to the talent gap

The savviest organizations are taking on the onus of training talent themselves, increasingly hiring people straight out of school, according to Jean-Marc Laouchez, president of the Korn Ferry Institute. These firms are also trying to instill a culture of continuous learning and training. “Constant learning — driven by both workers and organizations — will be central to the future of work, extending far beyond the traditional definition of learning and development,” Laouchez wrote. In that light, coding bootcamps have become talent pools for organizations looking for skills-based applicants over more traditional college graduates. Graduates from coding bootcamps reported a quick ROI, higher salaries, and STEM career opportunities, according to a recent survey of 3,800 US graduates of university coding bootcamps by US education company 2U and Gallup. All graduates reported they saw their salaries increase by a median of $11,000 one year after graduation, with those who moved from non-STEM to STEM jobs after graduation seeing the highest income growth.


Strategies for adopting data stewardship without a CDO

If the company has already concluded that it can’t hire a full-time CDO, the next best thing is to look at individuals in the company who have some of the skills or who have backgrounds and talents that would enable them to skill up quickly. The first place to look is in the database group. The database administrator should be charged with oversight of the development of the entire corporate data architecture. When an overall data architecture is in place, you have a structure that ensures all of your various data repositories and processes can interact with each other in enterprise-wide data exchanges and ensures you have the tools, such as APIs (application programming interfaces) and ETL (extract, transform, load), to facilitate integration. This also means eradicating stand-alone data silos that might exist within the company. ... The database group can work hand in hand with the IT security group to make sure all data is properly secured and that it meets corporate governance standards, even if the data is incoming from third-party vendors.


Secure everything, not just the weakest link

When looking at the security of links between a company and its business partners, BCS volunteer Petra Wenham says: “We must include the company’s IT in that statement and the security of a partner’s IT system.” Junade Ali, a technologist with an interest in software engineering management and computer security, points to the OAuth vulnerability as an example of the risks organisations face across their supply chains when they connect or make use of third-party systems. “In the recent past, I’ve worked on changing practices across the industry when it comes to password security,” he says. “I developed the anonymity models used by Have I Been Pwned, the developer tooling needed to improve password security practices and published scientific studies used to change the industry understanding of the best practice.” What Ali learned was that the reuse of compromised credentials from one low-value website (say, a pizza restaurant) often cascades to compromising someone’s online banking. He adds: “The message here is clear – security isn’t purely within our fiefdom and we depend on others to keep our data safe.”


How APTs Are Achieving Persistence Through IoT, OT, and Network Devices

Due to the low security and visibility of these devices, they are an ideal environment for staging secondary attacks on more valuable targets inside the victim's network. To do this, an attacker will first get into the company's network through traditional approaches like phishing. Attackers can also gain access by targeting an Internet-facing IoT device such as a VoIP phone, smart printer, or camera system, or an OT system such as a building access control system. Since most of these devices use default passwords, this type of breach is often trivial to achieve. Once on the network, the attacker will move laterally and stealthily to seek out other vulnerable, unmanaged IoT, OT, and network devices. Once those devices have been compromised, the attacker just needs to establish a communication tunnel between the compromised device and the attacker's environment at a remote location. In the case of UNC3524, attackers used a specialized version of Dropbear, which provides a client-server SSH tunnel and is compiled to operate on the Linux, Android, or BSD variants that are common on those devices.


Transforming advanced manufacturing through Industry 4.0

Organizational problems often involve low buy-in and a lack of concentration from leadership as a business attempts to see a digital transformation through. That hampers the effort’s potential success and long-term viability. Inadequate knowledge of digital capabilities and a lack of organizational talent can prevent broader buy-in and properly scaled transformative efforts. Technology roadblocks commonly include low support from partners in scaling deployment while facing multiple platform choices, which hinders an organization’s ability to move quickly into new territory. The transformation’s starting point can also stall when leaders aren’t convinced of their ability to increase the size and scope of the digital architecture they choose for implementation. AI companies have tried many approaches to overcome these barriers and realize improved performance through digital manufacturing transformations. An examination of advanced manufacturing lighthouses reveals two critical reasons that their transformations succeeded: first, they chose the right use cases; second, they looked for ways that those use cases could reinforce one another.


Cybersecurity and the metaverse: Identifying the weak spots

The metaverse is designed to function through the use of digital avatars that each user creates for themselves. Ostensibly, this avatar will be both unique and secure, which will allow the real human it represents to use their personally identifiable information (PII) and other sensitive information to make purchases, do work and even receive healthcare. In addition, through the avatar, the user can interact with others in the digital space, including working with colleagues in a virtual office. The concern, however, is that because the avatar is, fundamentally, the skeleton key to your private offline information, from your PII to your financial accounts, if a hacker gains access to your avatar, then they can open the door to your entire life. This holds the potential to take identity theft to an unprecedented level. Identity theft in the metaverse can also take another, and perhaps even more sinister, turn, however. If hackers gain control of your avatar, they may well engage in behaviors that can ruin your relationships and reputation, and may even put your offline safety at risk.


The promise of edge computing comes down to data

Move computing power to where the data is. Determining whether edge or cloud is optimal for a particular workflow or use case can cause analysis paralysis. Yet the truth is the models are complementary, not competing. “The general rule of thumb is that you’re far better moving compute to the data than vice versa,” said Robert Blumofe, executive vice president and chief technology officer at Akamai. “By doing so, you avoid back hauling, which hurts performance and is expensive.” Consider an e-commerce application that orchestrates actions like searching a product catalog, making recommendations based on history, or tracking and updating orders. “It makes sense to do the compute where that data is stored, in a cloud data warehouse or data lake,” Blumofe said. The edge, on the other hand, lends itself to computing on data that’s in motion — analyzing traffic flow to initiate a security action, for example. Go heavy on experimentation. It’s still early days in edge computing, and most companies are at the beginning of the maturity curve, evaluating how and where the model can have the most impact.


The surveillance-as-a-service industry needs to be brought to heel

The problem here is that surveillance technologies such as these have been commercialized. It means capabilities that historically have only been available to governments are also being used by private contractors. And that represents a risk, as highly confidential tools may be revealed, exploited, reverse-engineered and abused. As Google said: “Our findings underscore the extent to which commercial surveillance vendors have proliferated capabilities historically only used by governments with the technical expertise to develop and operationalize exploits. This makes the Internet less safe and threatens the trust on which users depend.” Not only this, but these private surveillance companies are enabling dangerous hacking tools to proliferate, while making these high-tech snooping facilities available to governments — some of which seem to enjoy spying on dissidents, journalists, political opponents, and human rights workers. An even bigger danger is that Google is already tracking at least 30 spyware makers, which suggests the commercial surveillance-as-a-service industry is strong.



Quote for the day:

"Strategy is not really a solo sport _ even if you_re the CEO." -- Max McKeown

Daily Tech Digest - June 27, 2022

Collaboration Is a Key Skill. So Why Aren’t We Teaching It?

Beyond an organization’s bottom line, positive workplace relationships matter to individuals’ well-being. Whether respondents’ relationships with their most liked, least liked, or most influential collaborators were being rated, the quality of their collaborative relationships positively predicted job satisfaction, good mental health, and positive attitudes about workplace collaboration. Having even one low-quality collaborative relationship may drive undesirable outcomes, including poor mental health that contributes to burnout, and job dissatisfaction that contributes to turnover. Given that collaborative relationship quality is important both to individuals and to bottom lines, why don’t organizations provide more opportunities for people to develop collaborative skills? It could be that companies do, in fact, make development opportunities available but that individuals fail to see those opportunities as either available or related to collaboration. Or it could be that such offerings are precluded by underlying assumptions that people pick up relationship skills via osmosis rather than direct training, that they are just naturally “good” or “not good” at relationships, or that these skills cannot be learned.


The Best Raspberry Pi 4 Alternatives

The Tinkerboard’s processor is more powerful than the one you’ll find in the Pi 4 B, so you may be able to get even more ambitious with your builds. However, when they’re available, you can get Pi 4s with up to 8 GB of RAM, which is more than the 2 GB that the Tinkerboard offers. Then there is the price. You can pick up a Tinkerboard S R2.0 on Amazon for $149.99 — which is more than some of the inflated Pi 4s are currently selling for. In short, this is a good option if you need more processing power or you can’t find a Pi 4, even at a premium. ... The Linux-powered ODROID XU4Q benefits from “Samsung Exynos5422 Cortex-A15 2Ghz and Cortex-A7 Octa core CPUs” along with 2GB of DDR3 RAM. On paper, this potentially makes the XU4Q the most powerful micro-computer on this list. It also comes with a very large heatsink attached, presumably to soak up some of the heat from its relatively powerful processor. With regards to ports, ODROID has managed to cram two USB 3.0, one USB 2.0, a Gigabit Ethernet, and an HDMI port onto the tiny board.


Google’s AI spotlights a human cognitive glitch: mistaking fluent speech for fluent thought

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs. The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions. However, in the case of AI systems, it misfires – building a mental model out of thin air. A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.” The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible.


Cloudflare's outage was human error. There's a way to make tech divinely forgive

What's the lesson? It's not news that people make mistakes, and the more baroque things become, the harder they are to guard against. It's just that what gets advertised on BGP isn't just routes but things crapping out, and when you're Cloudflare that's what the C in CDN becomes. It's not the first time it's happened, nor the last, and one trusts the company will hire a choreographer to prevent further op-on-op stompfests. Yet if it happens, and keeps happening, why aren't systems more resilient to this sort of problem? You can argue that highly dynamic and structurally fluid routing mechanisms can't be algorithmically or procedurally safeguarded, and that we're always going to live in the zone where the benefits of pushing just a bit too hard for performance are worth the occasional chaotic hour. That's defeatist talk, soldier. There's another way to protect against the unexpected misfire, other than predicting or excluding. You'll be using it already in different guises, some of which have been around since the dawn of computer time: state snapshotting.
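A minimal Python sketch of the snapshotting idea: capture the last known-good state before a risky change, and restore it if a health check fails. This assumes the change can be health-checked and rolled back; real systems persist snapshots and automate the checks.

```python
import copy

def apply_with_snapshot(state: dict, change, healthy) -> dict:
    snapshot = copy.deepcopy(state)   # last known-good state
    change(state)                     # the routing/config update
    if not healthy(state):
        return snapshot               # misfire detected: roll back, don't stay broken
    return state

routes = {"advertised": ["192.0.2.0/24"]}
bad_change = lambda s: s["advertised"].clear()   # oops: withdraws every route
routes = apply_with_snapshot(
    routes, bad_change, healthy=lambda s: bool(s["advertised"])
)
assert routes["advertised"] == ["192.0.2.0/24"]  # snapshot restored
```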


Stronger Security for Smart Devices To Efficiently Protect Against Powerful Hacker Attacks

Researchers are racing against hackers to develop stronger protections that keep data safe from malicious agents who would steal information by eavesdropping on smart devices. Much of the effort into preventing these “side-channel attacks” has focused on the vulnerability of digital processors. Hackers, for example, can measure the electric current drawn by a smartwatch’s CPU and use it to reconstruct secret data being processed, such as a password. MIT researchers recently published a paper in the IEEE Journal of Solid-State Circuits, which demonstrated that analog-to-digital converters in smart devices, which encode real-world signals from sensors into digital values that can be processed computationally, are vulnerable to power side-channel attacks. A hacker could measure the power supply current of the analog-to-digital converter and use machine learning algorithms to accurately reconstruct output data. Now, in two new research papers, engineers show that analog-to-digital converters are also susceptible to a stealthier form of side-channel attack, and describe techniques that effectively block both attacks.


What is AI governance?

It can be helpful to break the governance of AI algorithms into layers. At the lowest level, closest to the process, are the rules governing which humans have control over the training, retraining, and deployment. The issues of accessibility and accountability are largely practical and are implemented to prevent unknowns from changing the algorithm or its training set, perhaps maliciously. At the next level, there are questions about the enterprise that is running the AI algorithm. The corporate hierarchy that controls all actions of the corporation is naturally part of AI governance because the curators of the AI fall into the normal reporting structure. Some companies are setting up special committees to consider the ethical, legal, and political aspects of governing AI. Each entity also exists as part of a larger society, and many societal rule-making bodies are turning their attention to AI algorithms. Some are simply industry-wide coalitions or committees. Some are local or national governments, and others are nongovernmental organizations. All of these groups are often talking about passing laws or creating rules for how AI can be leashed.


Continuous Operations is the Unsung Hero of DevOps

For continuous operations to be successful, you must have infrastructure automation in place. In fact, continuous operations cannot exist without infrastructure automation. The true value that arises from the combination of infrastructure automation and continuous operations is that it gives back IT operations teams their time so they can focus on more complex reasoning or problem-solving tasks while the system simply continuously scans and fixes errors. ... The very essence of DevOps is constant change. Continuous operations may ultimately return your infrastructure to its desired state, but philosophically, it’s about being able to quickly and securely identify anomalies, apply fixes and modify your infrastructure as quickly as possible. It’s not as simple as flipping a switch or pushing a line of code. As the demand for security and compliance swells, continuous operations will have to build in these elements to be de facto checkboxes in the loop. At Puppet, we’ve baked continuous compliance and security into our infrastructure automation products to ensure continuous operations are indeed continuous. 


OT security: Helping under-resourced critical infrastructure organizations

The biggest problem in OT security is the cultural divide between IT and OT. IT security is a mature field, with standards, frameworks, and an abundance of mature and emerging technologies. The OT security field is much less mature, lacking people with OT security experience, established best practices and frameworks, and with a much smaller selection of security technologies. Historically, IT and OT have worked independently on security, with OT engineers overseeing security in the OT environment where it was not as critical due to lack of or limited connectivity to the internet and to the enterprise. Today, however, most OT environments are connected to the enterprise IT environment and to the internet. The benefits of Industry 4.0 and digital transformation in OT has accelerated connectivity in OT, including to cloud environments. The prevalence of converged IT/OT environments makes it imperative that IT and OT teams work together to secure them. The problem is that cultural divide. The good news is that it can be conquered, by bringing the two teams together to create an OT security strategy that is owned jointly by both teams. 


Why to Create a More Data-Conscious Company Culture

A data culture creates standards for employee data literacy and provides open and transparent access to what assets exist, as well as standards for curation, quality, and certification so employees have a shared understanding of the data within an organization. “This will not resolve the silos, but it will create a transparent view of the entire enterprise data fabric,” Wills explains. He adds some of the approaches Alation has seen work well include things like providing an enterprise-wide data literacy training and certification program, so that everyone shares the same perspective, vocabulary, and basic analytic skills. Each functional business unit and area should include data training as part of their employee onboarding as it provides a review of an organization’s authoritative data and data-related assets, the process used to maintain them, and sets expectations for how employees should participate. “Also, recognition: Nothing motivates more and sends a stronger message than employees seeing each other be recognized and rewarded for their contributions,” Wills says. 


Valuing commercial real estate in the metaverse

One of the most obvious ways in which digital real estate diverges from its physical counterpart is in the limited utility that it provides. This, of course, is because digital products do not require storage, nor do the digital people who populate the metaverse need to be kept comfortable or warm in indoor venues. However, the sense of discovery in the search for goods and services remains genuine within the metaverse, and it is in this way that virtual utility provides the most value. As businesses are free to design their purchased real estate however they want, they can dedicate their efforts to creating the most eye-catching and exciting facades that will entice users to discover more about their property – and ultimately the goods and services they have on offer. Therefore, it is not so much the utility of a piece of real estate that determines its valuation as its network power. For example, how easy is it to discover this real estate? How well connected is it? What is the purchasing power of the people coming to it? In this sense, valuing real estate in the metaverse, I’d argue, is a lot more like valuing a website, i.e., how many clicks does it get?



Quote for the day:

"Leadership - mobilization toward a common goal." -- Gary Wills

Daily Tech Digest - June 26, 2022

Only 3% of Open Source Software Bugs Are Actually Attackable, Researchers Say

Determining what's attackable means looking beyond the presence of open source dependencies with known vulnerabilities and examining how they're actually being used, says Manish Gupta, CEO of ShiftLeft. "There are many tools out there that can easily find and report on these vulnerabilities. However, there is a lot of noise in these findings," Gupta says. ... The idea of analyzing for attackability also involves assessing additional factors like whether the package that contains the CVE is loaded by the application, whether it is in use by the application, whether the package is in an attacker-controlled path, and whether it is reachable via data flows. In essence, it means taking a simplified threat modeling approach to open source vulnerabilities, with the goal of drastically cutting down on the fire drills. CISOs have already become all too familiar with these drills. When a new high-profile supply chain vulnerability like Log4Shell or Spring4Shell hits the industry back channels, then blows up into the media headlines, their teams are called to pull long days and nights figuring out where these flaws impact their application portfolios, and even longer hours applying fixes and mitigations to minimize risk exposures.
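A toy Python sketch of that triage: keep only the findings whose vulnerable package is loaded, actually used, and reachable from attacker-controlled input. The field names are invented; real tools derive these signals from call-graph and data-flow analysis rather than hand-set flags.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    package_loaded: bool       # is the dependency even loaded?
    package_used: bool         # does the app call into it?
    attacker_reachable: bool   # does attacker-controlled data flow reach it?

def attackable(findings):
    return [f for f in findings
            if f.package_loaded and f.package_used and f.attacker_reachable]

findings = [
    Finding("CVE-2021-44228", True, True, True),   # Log4Shell-style: fire drill justified
    Finding("CVE-XXXX-YYYY", True, False, False),  # placeholder: present but dormant
]
print([f.cve for f in attackable(findings)])  # only the reachable one survives
```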


The Power and Pitfalls of AI for US Intelligence

Depending on the presence or absence of bias and noise within massive data sets, especially in more pragmatic, real-world applications, predictive analysis has sometimes been described as “astrology for computer science.” But the same might be said of analysis performed by humans. A scholar on the subject, Stephen Marrin, writes that intelligence analysis as a discipline by humans is “merely a craft masquerading as a profession.” Analysts in the US intelligence community are trained to use structured analytic techniques, or SATs, to make them aware of their own cognitive biases, assumptions, and reasoning. SATs—which use strategies that run the gamut from checklists to matrixes that test assumptions or predict alternative futures—externalize the thinking or reasoning used to support intelligence judgments, which is especially important given the fact that in the secret competition between nation-states not all facts are known or knowable. But even SATs, when employed by humans, have come under scrutiny by experts like Chang, specifically for the lack of scientific testing that can evidence an SAT’s efficacy or logical validity.


Data Modeling and Data Models: Not Just for Database Design

The prevailing application-centric mindset has caused the fundamental problems that we have today, Bradley said, with multiple disparate copies of the same concept in system after system. Unless we replace that mindset with one that is more data-focused, the situation will continue to propagate, he said. ... Models have a wide variety of applicable uses and can present different levels of detail based on the intended user and context. A map, for example, is a model, and maps can be used much as models are used in a business. Like data models, there are different levels of maps for different audiences and different purposes. A map of the counties in an election will provide a different view than a street map used for finding an address. A construction team needs a different type of detail on a map they use to connect a building to city water, and a lesson about different countries on a globe uses still another level of detail targeted to a different type of user. Similarly, some models are more focused on communication and others are used for implementation.


Microverse IDE Unveiled for Web3 Developers, Metaverse Projects

"With Microverse IDE, developers and designers collaboratively build low-latency, high-performance multiuser Microverse spaces and worlds which can then be published anywhere," the company said in a June 21 news release. As part of its Multiverse democratization effort, Croquet has open sourced its Microverse IDE Metaverse world builder and some related components under the Apache License Version 2.0 license so developers and adopters can examine, use and modify the software as needed. ... The California-based Croquet also announced the availability of its multiplane portal technology, used to securely connect independent 3D virtual worlds developed by different parties, effectively creating the Metaverse from independent microservices. These connections can even span different domains, the company said, thus providing safe, secure and decentralized interoperability among various worlds independent of the large technology platforms. "Multiplane portals solve a fundamental problem in the Metaverse with linking web-based worlds in a secure and safe way," the company said.


5 Firewall Best Practices Every Business Should Implement

Changes that impact your IT infrastructure happen every single day. You might install new applications, deploy additional network equipment, grow your user base, adopt non-traditional work practices, etc. As all this happens, your IT infrastructure’s attack surface will also evolve. Sure, you can make your firewall evolve with it. However, making changes to your firewall isn’t something you should take lightly. A simple mistake can take some services offline and disrupt critical business processes. Similarly, you could also expose ports to external access and compromise their security. Before you apply changes to your firewall, you need to have a change management plan. The plan should specify the changes you intend to implement and what you hope to achieve. ... Poorly configured firewalls can be worse than having no firewall, as a poorly installed firewall will give you a false sense of security. The same is true with firewalls without proper deployment planning or routine audits. However, many businesses are prone to these missteps, resulting in weak network security and a failed investment.


Debate over AI sentience marks a watershed moment

While it is objectively true that large language models such as LaMDA, GPT-3 and others are built on statistical pattern matching, subjectively this appears like self-awareness. Such self-awareness is thought to be a characteristic of artificial general intelligence (AGI). Well beyond the mostly narrow AI systems that exist today, AGI applications are supposed to replicate human consciousness and cognitive abilities. Even in the face of remarkable AI advances of the last couple of years there remains a wide divergence of opinion between those who believe AGI is only possible in the distant future and others who think this might be just around the corner. DeepMind researcher Nando de Freitas is in this latter camp. Having worked to develop the recently released Gato neural network, he believes Gato is effectively an AGI demonstration, only lacking in the sophistication and scale that can be achieved through further model refinement and additional computing power. The deep learning transformer model is described as a “generalist agent” that performs over 600 distinct tasks with varying modalities, observations and action specifications. 


Data Architecture Challenges

Most traditional businesses preserved data privacy by holding function-specific data in departmental silos. In that scenario, data used by one department was not available or accessible by another department. However, that caused a serious problem in the advanced analytics world, where 360-degrees customer data or enterprise marketing data are everyday necessities. Companies, irrespective of their size, type, or nature of business, soon realized that to succeed in the digital age, data had to be accessible and shareable. Then came data science, artificial intelligence (AI), and a host of related technologies that transformed businesses overnight. Today, an average business is data-centric, data-driven, and data-powered. Data is thought of as the new currency in the global economy. In this globally competitive business world, data in every form is traded and sold. For example, 360-degrees customer data, global sales data, health care data, and insurance history data are all available with a few keystrokes. A modern Data Architecture is designed to “eliminate data silos, combining data from all corners of the company along with external data sources.” 


One in every 13 incidents blamed on API insecurity – report

Lebin Cheng, vice president of API security at Imperva, commented: “The growing security risks associated with APIs correlate with the proliferation of APIs, combined with the lack of visibility that organizations have into these ecosystems. At the same time, since every API is unique, every incident will have a different attack pattern. A traditional approach to security where one simple patch addresses all vulnerabilities doesn’t work with APIs.” Cheng added: “The proliferation of APIs, combined with the lack of visibility into these ecosystems, creates opportunities for massive, and costly, data leakage.” ... By the same metric, professional services were also highly exposed to API-related problems (10%-15%), while manufacturing, transportation, and utilities (all 4-6%) are in the mid-range. Industries such as healthcare have less than 1% of security incidents attributable to API-related security problems. Many organizations are failing to protect their APIs because it requires equal participation from the security and development teams, which historically have been somewhat at odds.


What Are Deep Learning Embedded Systems And Their Benefits?

Deep learning is a hot topic in machine learning, with many companies looking to implement it in their products. Here are some benefits that deep learning embedded systems can offer. Increased Efficiency and Performance: Deep learning algorithms are incredibly efficient, meaning they can achieve high performance even when running on small devices. Deep learning embedded systems can therefore be used to improve the performance of existing devices and platforms, or to create new devices that are both powerful and efficient. Reduced Size and Weight: Deep learning algorithms are often very compact and can be implemented on small devices without sacrificing much performance or capability. This reduces the device’s size and weight, making it more portable and easier to use. Greater Flexibility: Deep learning algorithms can often exploit complex data sets to improve performance. This means deep learning embedded systems can be configured to work with various data sets and applications, giving them greater flexibility and adaptability.
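
To make the size point concrete, here is a minimal sketch in TypeScript using TensorFlow.js (the article names no framework, so the library choice and the model shape are assumptions): a classifier with only about a hundred parameters, small enough in memory and compute terms to suit a constrained device.

```typescript
// Sketch only: a deliberately tiny model whose small parameter count
// makes it cheap to run on resource-constrained hardware.
// Assumes the @tensorflow/tfjs package is installed.
import * as tf from '@tensorflow/tfjs';

// A compact two-layer classifier: roughly 100 parameters in total.
const model = tf.sequential();
model.add(tf.layers.dense({ inputShape: [8], units: 8, activation: 'relu' }));
model.add(tf.layers.dense({ units: 4, activation: 'softmax' }));

console.log(`Trainable parameters: ${model.countParams()}`);

// Inference on a single hypothetical 8-feature sensor reading.
const reading = tf.tensor2d([[0.1, 0.3, 0.2, 0.9, 0.5, 0.0, 0.7, 0.4]]);
(model.predict(reading) as tf.Tensor).print();
```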


State-Backed Hackers Using Ransomware as a Decoy for Cyber Espionage Attacks

The activity cluster, attributed to a hacking group dubbed Bronze Starlight by Secureworks, involves the deployment of post-intrusion ransomware such as LockFile, Atom Silo, Rook, Night Sky, Pandora, and LockBit 2.0. "The ransomware could distract incident responders from identifying the threat actors' true intent and reduce the likelihood of attributing the malicious activity to a government-sponsored Chinese threat group," the researchers said in a new report. "In each case, the ransomware targets a small number of victims over a relatively brief period of time before it ceases operations, apparently permanently." Bronze Starlight, active since mid-2021, is also tracked by Microsoft under the emerging threat cluster moniker DEV-0401, with the tech giant emphasizing its involvement in all stages of the ransomware attack cycle, from initial access to payload deployment. ... Key victims include pharmaceutical companies in Brazil and the U.S., a U.S.-based media organization with offices in China and Hong Kong, electronic component designers and manufacturers in Lithuania and Japan, a law firm in the U.S., and an aerospace and defense division of an Indian conglomerate.



Quote for the day:

"Leadership has a harder job to do than just choose sides. It must bring sides together." -- Jesse Jackson

Daily Tech Digest - June 25, 2022

What Are CI And CD In DevOps And How Do They Work?

The purpose of continuous delivery is to put a packaged artifact into production. With CD, the whole delivery process, including deployment, is automated. CD tasks may involve provisioning infrastructure, tracking changes (ticketing), deploying artifacts, verifying and tracking those changes, and halting the rollout if any problems arise. Some firms use parts of continuous delivery to help maintain their operational duties. A good example is employing a CD pipeline to handle infrastructure deployment. Some organizations will leverage their CD pipelines to coordinate infrastructure setup and configuration using configuration management tools such as Ansible, Chef, or Puppet. A CI/CD pipeline may appear to be overhead, but it is not. It is essentially an executable definition of the procedures that any developer must take in order to deliver a new edition of a software product. Without an automated pipeline, developers would have to complete these processes manually, which would be significantly less productive.
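
As a rough illustration of that "executable definition" idea (the stage names and npm scripts below are hypothetical, not from the article), a pipeline can be expressed as ordinary TypeScript that runs each delivery step in order and stops promoting the release as soon as one fails:

```typescript
// Sketch of a CI/CD pipeline as an executable definition of delivery steps.
// In practice these stages would live in a CI system; the commands here
// are placeholder npm scripts.
import { execSync } from 'node:child_process';

interface Stage {
  name: string;
  command: string;
}

const pipeline: Stage[] = [
  { name: 'build',   command: 'npm run build' },
  { name: 'test',    command: 'npm test' },
  { name: 'package', command: 'npm pack' },
  { name: 'deploy',  command: 'npm run deploy:staging' }, // hypothetical script
];

for (const stage of pipeline) {
  console.log(`--- ${stage.name} ---`);
  try {
    execSync(stage.command, { stdio: 'inherit' });
  } catch {
    // Fail fast: a broken stage stops the release, mirroring how a CD
    // pipeline refuses to promote changes when problems arise.
    console.error(`Stage "${stage.name}" failed; aborting the pipeline.`);
    process.exit(1);
  }
}
```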


Why You Need to Be an Influencer Brand and the 3 Rs of Becoming One

Of course, brands have been creating content for decades. Content marketing is creating and distributing valuable, relevant and consistent content to attract/retain an audience, driving profitable action. The difference is that influencer brands have shifted their entire orientation to a consumer-centric integrated marketing communications (IMC) mindset. Influencer brands go beyond blogs, infographics, eBooks, testimonials, and how-to guides that appeal to the head. They have learned to appeal to the heart of their audience. This comes from seeing the world from the target's perspective, a shift that can be seen in following the three Rs of influence to direct brand content creation. For example, the focus of Yeti Coolers' content and engagement isn't selling coolers. It is selling a lifestyle that the coolers help enable. Yeti organizes products so customers can shop by activity. Images and copy lead with stories of the adventures their audience can have with the gear — fishing, hunting, camping, by the coast, in the snow, on the ranch and in the rodeo arena.


3 certification tips for IT leaders looking to get ahead

If leveraged properly, certifications can also assist IT decision-makers in their key leadership responsibilities. For example, Puneesh Lamba, CIO of Shahi Exports, an apparel manufacturing company, acknowledges that certifications have helped him perform better in board meetings, thereby making it easier to get approvals on IT spending. “Typically, CIOs from large technology companies have strong IT skills but poor communications skills, while it’s just the opposite for CIOs in customer-facing B2C companies. These technology leaders need to get certified in areas that they lack. While CIOs push their team to get certified, they need to come out of their comfort zones and follow suit,” says Chandra. But the benefits of certifications won’t accrue automatically. IT leaders seeking to advance their skills and careers need to build a strategy aimed at squeezing the maximum value out of what certifications can offer. Here, four CIOs share their experiences in pursuing certifications and offer advice on how to make the most of these valuable career advancement tools as an IT leader.


Magnetic superstructures as a promising material for 6G technology

The race to realize sixth generation (6G) wireless communication systems requires the development of suitable magnetic materials. Scientists from Osaka Metropolitan University and their colleagues have detected an unprecedented collective resonance at high frequencies in a magnetic superstructure called a chiral spin soliton lattice (CSL), revealing CSL-hosting chiral helimagnets as a promising material for 6G technology. The study was published in Physical Review Letters. Future communication technologies require expanding the frequency band from the current few gigahertz (GHz) to over 100 GHz. Such high frequencies are not yet possible, given that existing magnetic materials used in communication equipment can only resonate and absorb microwaves up to approximately 70 GHz with a practical-strength magnetic field. Addressing this gap in knowledge and technology, the research team led by Professor Yoshihiko Togawa from Osaka Metropolitan University delved into the helicoidal spin superstructure CSL.


Don’t fall into the personal brand trap

While you can try to emulate the positive qualities of branding, the truth is that rulebook wasn’t designed with you in mind. Brands are static creations, while you must be a dynamic participant in your life and career. Brands let the consensus of others dictate their values and meaning, while you must discover both for yourself. Brands chase consistency by reorienting to match the expectations of “consumers,” while you must reserve room to grow and develop without a sense of self-fraudulence. Take the personal-branding prescription too far, and you run the risk of cementing your identity to the brand. New passions go unexplored. Fears and struggles must be ignored over concerns of not being “on brand.” And your life endeavors are filtered through the lens of marketability rather than the pursuit of their intrinsic worth. All of which can be counterproductive to your sense of authenticity. As one meta-analysis found, authenticity had a positive relationship with both well-being and engagement. But to achieve that, you must meet yourself as you are today, not who you were 10 years ago when you settled on your personal brand.


Is NextJS a Better Choice Than React in 2022?

If you know React, you kind of know NextJS. This is because Next is a React framework. You have components just like in React. CSS has a different naming convention, but that's the biggest change. The reason Next is so good is that it gives you options. If you want a page to have good SEO, you can use getServerSideProps. If you want to use client-side rendering (CSR), you can use useEffect to call your APIs, just like in React. Adding TypeScript to your Next project is also very simple. You even have a built-in router and don't have to use React Router. The option to choose between CSR, server-side rendering (SSR), and static site generation (SSG) is what makes Next the best. You even get a free tier on Vercel for your Next project. Now that you're convinced that you should use Next.js, you might wonder how to change your existing website to Next. Next.js is designed for gradual adoption. Migrating from React to Next is pretty straightforward and can be done slowly by gradually adding more pages. You can configure your server so that everything under a specific subpath points to the Next.js app. If your site is abc.com, you can configure abc.com/about to serve a Next.js app. This has been explained really well in the Next.js docs.
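
As a quick sketch of the two data-fetching options described above (the /api endpoints and the abc.com URL are illustrative placeholders, not from the article), a single Next.js page written in TypeScript can mix server-side rendering via getServerSideProps with client-side fetching via useEffect:

```typescript
// pages/products.tsx — sketch of SSR and CSR side by side in one page.
import { useEffect, useState } from 'react';
import type { GetServerSideProps } from 'next';

type Props = { products: string[] };

// SSR: data is fetched on the server, so crawlers see fully rendered HTML.
export const getServerSideProps: GetServerSideProps<Props> = async () => {
  const res = await fetch('https://abc.com/api/products'); // hypothetical API
  return { props: { products: await res.json() } };
};

export default function ProductsPage({ products }: Props) {
  // CSR: this call runs in the browser after the page loads,
  // exactly as it would in a plain React app.
  const [related, setRelated] = useState<string[]>([]);
  useEffect(() => {
    fetch('/api/related') // hypothetical endpoint, fetched client-side
      .then((r) => r.json())
      .then(setRelated);
  }, []);

  return <ul>{[...products, ...related].map((p) => <li key={p}>{p}</li>)}</ul>;
}
```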


How machine learning AI is going to change gaming forever

Obviously, machine learning techniques have broad implications for almost every sector of life, but how they will intersect with gaming has potentially some of the broadest implications for Microsoft as a business. One problem the video game industry faces right now is the gap between expectations and investment. Video games are becoming increasingly complex to make, fund, and manage as they grow exponentially in complexity and graphical fidelity. We've seen absolutely insane Unreal Engine demos that showcase near-photorealistic scenes and graphics, but the manual labor involved in producing a full game based on some of these principles is truly palpable in terms of both time and expense. What is typically thought of as "AI" in a gaming context generally hasn't been AI in the true sense of the word. Video game non-player characters (NPCs) and enemies generally operate on a rules-based model that often has to be manually crafted by a programmer. Machine learning models, importantly, are far more fluid, able to produce their own rules within parameters and respond dynamically to new information on the fly.
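
That contrast can be made concrete with a toy sketch (entirely illustrative, not from the article): a hand-authored, rules-based NPC next to a minimal "learned policy" whose behavior comes from weights rather than hand-written branches.

```typescript
// Sketch: rules-based NPC vs. a toy learned policy. The weights below are
// placeholders; a real model would learn them from play data.
type State = { playerDistance: number; health: number };
type Action = 'attack' | 'flee' | 'patrol';

// Traditional game "AI": every branch is authored by a programmer.
function rulesBasedNpc(s: State): Action {
  if (s.health < 20) return 'flee';
  if (s.playerDistance < 5) return 'attack';
  return 'patrol';
}

// A learned policy scores each action from features of the game state
// and picks the highest-scoring one, with no hand-written branches.
const weights: Record<Action, [number, number]> = {
  attack: [-0.8, 0.5],
  flee:   [0.1, -0.9],
  patrol: [0.4, 0.2],
};

function learnedNpc(s: State): Action {
  const features = [s.playerDistance / 10, s.health / 100];
  let best: Action = 'patrol';
  let bestScore = -Infinity;
  for (const action of Object.keys(weights) as Action[]) {
    const [w0, w1] = weights[action];
    const score = w0 * features[0] + w1 * features[1];
    if (score > bestScore) { bestScore = score; best = action; }
  }
  return best;
}

console.log(rulesBasedNpc({ playerDistance: 3, health: 80 })); // "attack"
console.log(learnedNpc({ playerDistance: 3, health: 80 }));
```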


Reflections about low-code data management

As more people began using the Internet, better tools and resources became available. Today, the market is full of low-code Content Management Systems (CMS) and drag-and-drop website builders (WordPress, HubSpot, Shopify, Squarespace, etc.) that make it easy to create a professional-looking website without any coding knowledge. While there are still a handful of very specific use cases where you would need to code a website from scratch, organizations realized that using a low-code CMS or drag-and-drop builder was a much better option in the vast majority of cases. This shift has led to a dramatic decrease in the amount of time and effort required to build a website. In fact, you can now create an entire website in just a few hours using these low-code tools. With every great shift comes some level of resistance. At first, web developers were skeptical of (or outright opposed to) low-code tools for the following reasons. Fear of Replacement: Developers saw these tools as a threat to their jobs. Power & Flexibility: Developers were unconvinced that the tools would be powerful, flexible, or customizable enough to produce the same quality of work. 


Inside the Metaverse: Architects See Opportunity in a Virtual World

“The metaverse is not an escape, and it's not a video game,” Patrik Schumacher, principal at Zaha Hadid Architects (ZHA), told RECORD. “It will become the immersive internet for corporations, for education, for retail, and also for socializing and networking in more casual arenas. Everything we are doing in the real world could potentially be substituted or augmented or paralleled with interactions in the metaverse.” ZHA was one of the first major firms to take the plunge into metaverse design. In early March, the firm announced that it would build an entire metaverse city—a digital version of the unrecognized, and as yet unbuilt, sovereign state “Liberland” that was founded seven years ago by the right-wing Czech politician Vít Jedlička. “At the time, I was very frustrated with planning regulations and overbearing political constraints on city development,” says Schumacher, who has long fought against government intervention in urban development.


5 social engineering assumptions that are wrong

Users may be more inclined to interact with content if it appears to originate from a source they recognize and trust, but threat actors regularly abuse legitimate services such as cloud storage providers and content distribution networks to host and distribute malware as well as credential harvesting portals, according to Proofpoint. “Threat actors may prefer distributing malware via legitimate services due to their likelihood of bypassing security protections in email compared to malicious documents. Mitigating threats hosted on legitimate services continues to be a difficult vector to defend against as it likely involves implementation of a robust detection stack or policy-based blocking of services which might be business relevant,” the report read. ... There’s a tendency to assume that social engineering attacks are limited to email, but Proofpoint detected an increase in attacks perpetrated by threat actors leveraging a robust ecosystem of call center-based email threats involving human interaction over the telephone. “The emails themselves don’t contain malicious links or attachments, and individuals must proactively call a fake customer service number in the email to engage with the threat actor. ...”



Quote for the day:

"The ability to stay calm and polite, even when people upset you, is a superpower." -- Vala Afshar