Daily Tech Digest - June 08, 2021

DeepMind scientists: Reinforcement learning is enough for general AI

A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. “[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence,” the researchers write. This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated. This simple yet efficient mechanism has led to the evolution of living beings with all kinds of skills and abilities to perceive, navigate, modify their environments, and communicate among themselves. 
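The hypothesis can be made concrete with a toy experiment. Below is a minimal tabular Q-learning sketch (the environment, parameters and names are all invented for illustration, not taken from the DeepMind paper): an agent given nothing but a reward at the end of a short corridor nonetheless learns a sensible navigation policy.

```python
import random

# Toy illustration of "reward is enough": tabular Q-learning on a
# 5-state corridor where only the rightmost state pays any reward.
N_STATES = 5
ACTIONS = (-1, +1)                      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def act(q, s, rng):
    """Epsilon-greedy action selection with random tie-breaking."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    best = max(q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            a = act(q, s, rng)
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0   # the only reward signal
            q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
# Purely from chasing reward, the agent learns to head right in every state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The environment supplies no labels, demonstrations, or objectives beyond the scalar reward, yet goal-directed behaviour emerges, which is the shape of the researchers' argument scaled down to a few lines.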


How To Become A Machine Learning Engineer?

New roles such as machine learning architect are being created today. As the platform gets bigger, the machine learning engineer should handle the entire architecture and evolve to meet the needs of the data science, machine learning and data analytics organisations. The most important attribute of an ML engineer is a focus on production and model deployment — not just code that works, but code that functions in the real world, alongside an understanding of industry best practices for successfully integrating and deploying machine learning models. For starters, a degree in computer science, robotics, engineering or physics, along with competency in C, C++, Java, Python, R, Scala, Julia, and other enterprise languages, helps. A strong understanding of databases adds further weight. In terms of experience, software engineers, software developers, and software architects are well suited to machine learning engineering roles. “It is almost a straight line from cloud architect to ML engineer and ML architect, as these two roles have so much overlap. If you understand data science and machine learning, you can understand models,” said Vashishta.


.NET Ranks High in Coding Bootcamp Report

Microsoft's .NET development framework ranked high in recent research about coding bootcamps, or "immersive technology education." With an average cost of about $14,000, these accelerated learning programs can last from six to 28 weeks -- averaging about 14 weeks -- and promise to advance careers, boosting both technical chops and bottom-line salary. Course Report studies the industry and presents its findings in annual reports that can help coders pick the best option, among some 500 around the world, with the choice of programming language being a primary factor. "Coding bootcamps employ teaching languages to introduce students to the world of programming," Course Report said in its latest study: Coding Bootcamps in 2021, an update of a 2020 report. "While language shouldn't be the main deciding factor when choosing a bootcamp, students may have specific career goals that guide them towards a particular language. In that case, first decide whether you'd prefer to learn web or mobile development. For the web, your main choices are Ruby, Python, LAMP stack, MEAN stack and .NET languages."


Amazon Sidewalk starts sharing your WiFi tomorrow, thanks

Amazon Sidewalk will create a mesh network between smart devices that are located near one another in a neighborhood. Through the network, if, for instance, a home WiFi network shuts down, the Amazon smart devices connected to that home network will still be able to function, as they will be borrowing internet connectivity from neighboring products. Data transfer between homes will be capped, and the data communicated through Amazon Sidewalk will be encrypted. Amazon smart device owners will automatically be enrolled into Amazon Sidewalk, but they can opt out before a June 8 deadline. That deadline has irked many cybersecurity and digital rights experts, as Amazon Sidewalk itself was not unveiled until June 1—just one week before a mass rollout. Jon Callas, director of technology projects at Electronic Frontier Foundation, told the news outlet ThreatPost that he did not even know about Amazon’s white paper on the privacy and security protocols of Sidewalk until a reporter emailed him about it. “They dropped this on us,” Callas said in speaking to ThreatPost. “They gave us seven days to opt out.”


Researchers Discover a Molecule Critical to Functional Brain Rejuvenation

Recent studies suggest that new brain cells are being formed every day in response to injury, physical exercise, and mental stimulation. Glial cells, and in particular the ones called oligodendrocyte progenitors, are highly responsive to external signals and injuries. They can detect changes in the nervous system and form new myelin, which wraps around nerves and provides metabolic support and accurate transmission of electrical signals. As we age, however, less myelin is formed in response to external signals, and this progressive decline has been linked to the age-related cognitive and motor deficits detected in older people in the general population. Impaired myelin formation also has been reported in older individuals with neurodegenerative diseases such as Multiple Sclerosis or Alzheimer’s and identified as one of the causes of their progressive clinical deterioration. ... The discovery also could have important implications for molecular rejuvenation of aging brains in healthy individuals, said the researchers. Future studies aimed at increasing TET1 levels in older mice are underway to define whether the molecule could rescue new myelin formation and favor proper neuro-glial communication.


Fujifilm refuses to pay ransomware demand, restores network from backups

Jake Moore, cybersecurity specialist at internet security firm ESET, said refusing to pay a ransom is “not a decision to be taken lightly.” Ransomware gangs often threaten to leak or sell sensitive data if payment is not made. However, Fujifilm Europe said it is “highly confident that no loss, destruction, alteration, unauthorised use or disclosure of our data, or our customers’ data, on Fujifilm Europe’s systems has been detected.” The spokesperson added: “From a European perspective, we have determined that there is no related risk to our network, servers and equipment in the EMEA region or that of our customers across EMEA. We presently have no indication that any of our regional systems have been compromised, including those involving customer data.” It is not clear if the ransomware gang stole Fujifilm data from the affected network in Japan. Fujifilm declined to comment when asked if those responsible had threatened to publish data if the ransom is not paid. According to security news site Bleeping Computer, Fujifilm was infected with the Qbot trojan last month.


Fixing Risk Sharing With Observability

The challenge is that one party, the developers, has more information than other parties. That information asymmetry is what creates unbalanced risk sharing. Coping with information asymmetry has led to all kinds of new collaborative models, starting with DevOps and evolving into DevSecOps and other permutations like BizDevSecOps. True collaboration has been hard to come by. Early DevOps efforts are often successful, but scaling beyond five to seven teams is difficult because teams lack the breadth of experience in IT operations or the SRE capacity to staff multiple product teams. The change velocity DevOps teams can achieve is often far greater than SREs and SecOps can absorb, making information asymmetry worse. If teams can’t maintain high levels of collaboration and communication, another option must be developed. Observability practices, like collecting all events, metrics, traces and logs, allow SREs and SecOps teams to interrogate applications about their behavior without knowing which questions they want to ask ahead of time. However, observability only works if applications, and the infrastructure they rely on, are instrumented.
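As a sketch of what this looks like in practice, here is a minimal, hypothetical instrumentation layer: every call emits a structured event, and an SRE can later ask questions nobody anticipated at build time. The function names and event schema are invented for illustration.

```python
import functools
import time

TELEMETRY = []  # stand-in for an events pipeline / observability backend

def instrumented(fn):
    """Record a structured event for every call: name, duration, outcome."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            outcome = "ok"
            return result
        except Exception:
            outcome = "error"
            raise
        finally:
            TELEMETRY.append({
                "event": fn.__name__,
                "duration_ms": (time.perf_counter() - start) * 1000,
                "outcome": outcome,
            })
    return wrapper

@instrumented
def charge(amount):
    if amount < 0:
        raise ValueError("negative amount")
    return {"charged": amount}

for amount in (10, 25, -5):
    try:
        charge(amount)
    except ValueError:
        pass

# An SRE can now interrogate behaviour after the fact, with a question
# that was never hard-coded into the application:
errors = [e for e in TELEMETRY if e["outcome"] == "error"]
print(len(TELEMETRY), len(errors))   # 3 events, 1 error
```

The key property is that events are recorded unconditionally; the queries come later, which is what lets SecOps and SRE teams work without advance knowledge of the developers' code paths.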


How to Structure a Digital Transformation Project Team

The first, and arguably the most important, role on a project team is your executive steering committee. This is typically a cross-functional group of executives within your organization that is responsible for setting the vision for the overall transformation. They're responsible for approving scope changes, or any material changes to the project plan or the budget. They're ultimately responsible for setting the tone and the vision for the overall future state. The question to ponder here is: how do we want our operating model to look, and what do we want our organization to look like in the future? For lack of a better word, what do we want to be when we grow up? Before I get into the rest of the project team, something that's very important even before we talk about other team roles is who should fill them. The first thing you want to do is make sure that the steering committee is aligned on the overall transformation vision, strategy, and objectives. If you start filling out the project team prematurely, when you don't have that alignment ...


Windows Container Malware Targets Kubernetes Clusters

After it compromises web servers, Siloscape uses container escape tactics to achieve code execution on the Kubernetes node. Prizmant said that Siloscape’s heavy use of obfuscation made it a chore to reverse-engineer. “There are almost no readable strings in the entire binary. While the obfuscation logic itself isn’t complicated, it made reversing this binary frustrating,” he explained. The malware obfuscates functions and module names – including simple APIs – and only deobfuscates them at runtime. Instead of just calling the functions, Siloscape “made the effort to use the Native API (NTAPI) version of the same function,” he said. “The end result is malware that is very difficult to detect with static analysis tools and frustrating to reverse engineer.” “Siloscape is being compiled uniquely for each new attack, using a unique pair of keys,” Prizmant continued. “The hardcoded key makes each binary a little bit different than the rest, which explains why I couldn’t find its hash anywhere. It also makes it impossible to detect Siloscape by hash alone.”
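To see why runtime deobfuscation defeats static analysis, consider a deliberately simple XOR string-obfuscation scheme. This illustrates the general technique only, and is not Siloscape's actual algorithm:

```python
# Generic XOR string obfuscation (illustration only, not Siloscape's scheme).
KEY = 0x5A  # arbitrary single-byte key for the example

def obfuscate(name: str) -> bytes:
    """What the malware author does at build time."""
    return bytes(b ^ KEY for b in name.encode())

def deobfuscate(blob: bytes) -> str:
    """What the binary does at runtime, just before resolving the API."""
    return bytes(b ^ KEY for b in blob).decode()

# The binary on disk stores only scrambled bytes...
blob = obfuscate("NtCreateFile")
assert b"NtCreateFile" not in blob   # nothing for `strings` to find

# ...and the readable API name exists only in memory, at runtime.
print(deobfuscate(blob))   # NtCreateFile
```

Because the plaintext name never appears in the file on disk, signature and string-based static tools see nothing, while a reverse engineer must recover each name by executing or emulating the decoding routine.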


Machine learning at the edge: TinyML is getting big

Whether it's stand-alone IoT sensors, devices of all kinds, drones, or autonomous vehicles, there's one thing in common. Increasingly, data generated at the edge are used to feed applications powered by machine learning models. There's just one problem: machine learning models were never designed to be deployed at the edge. Not until now, at least. Enter TinyML. Tiny machine learning (TinyML) is broadly defined as a fast-growing field of machine learning technologies and applications, including hardware, algorithms and software capable of performing on-device sensor data analytics at extremely low power (typically in the mW range and below), enabling a variety of always-on use cases on battery-operated devices. ... First, the working definition of what constitutes TinyML was, and to some extent still is, debated. What matters is how devices can be deployed in the field and how they're going to perform, said Gousev. That will be different depending on the device and the use case, but the point is being always on and not having to change batteries every week. That can only happen in the mW range and below.
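One core trick that helps models fit in the mW range can be sketched in a few lines: post-training quantization, which maps float32 weights to int8 so each weight takes one byte instead of four. The example below is a simplified illustration, not a production TinyML pipeline:

```python
# Simplified symmetric post-training quantization: float32 -> int8.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                  # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]     # small ints, 1 byte each
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.82, -0.41, 0.05, -1.27]
q, scale = quantize(w)
print(q)                      # e.g. [82, -41, 5, -127]
approx = dequantize(q, scale)
# Storage shrinks 4x, and the reconstruction error is bounded by the scale.
print(max(abs(a - b) for a, b in zip(w, approx)) < scale)
```

Real toolchains add calibration, per-channel scales, and quantization-aware training, but the storage-versus-precision trade illustrated here is the reason inference becomes feasible on battery-operated microcontrollers.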



Quote for the day:

"If you're relying on luck, you have already given up." -- Gordon Tredgold

Daily Tech Digest - June 07, 2021

Why data storage isn’t a ‘one size fits all’ solution for IoT

Certain storage solutions are designed with the properties of endurance and resilience at the forefront. These offerings, including the highly reliable and industrial-grade e.MMC (embedded Multimedia Card) and UFS (Universal Flash Storage) embedded flash drives, can endure harsh environments, including those with extreme temperatures or vibrations, such as in a factory setting. One common use case that requires such solutions is in industrial-use drones. These drones, for example, are used by oil-rig workers to complete inspections more quickly and without risking worker safety. Similarly, search and rescue drones require high performance in varying environments, such as those with fluctuating and extreme weather patterns. One way to achieve low latency is to bring compute and storage nearer to the place it is used, like the network edge, or to devices closer to the edge. This helps enable rapid real-time data transfers and analysis at the edge, where low latency is a fundamental requirement. Smart cities, for example, use and act on real-time data. Emergency services can communicate with traffic lights to synchronise and provide quicker and more direct access to critical locations whilst holding traffic at bay.


The future of storage resides at the intersection of the edge and cloud

Firstly, the digital leaders of the future can’t be built on the technology approaches of the past – IT needs to evolve to provide a technology foundation that accelerates digital innovation. Today’s storage infrastructure technology is designed to make hybrid cloud environments and data produced at the edge easier to deploy and manage. These purpose-built suites of solutions have evolved to fill an essential role in the data centre, providing ever-expanding levels of performance, capacity and resiliency for mission-critical workloads. Modern storage architecture is helping businesses succeed, by not only supporting current business needs but also allowing IT infrastructure to scale and evolve as business dynamics change. Therefore, organisations must refresh their storage infrastructure on a regular basis and keep up with increased data demands by eliminating ageing infrastructure that is more susceptible to failures that cause outages/downtime. Modern storage infrastructure also frequently includes advanced data protection features that help ensure the on-premises data remains safe and secure.


Artificial Intelligence: The Evolution Of Neural Networks

Artificial neural networks along with machine learning and artificial intelligence can flawlessly predict severe illnesses. For example, the output of waves of an ECG can be analyzed to understand a patient’s heart and predict heart attacks well in time. Similarly, with an adequate amount of data, dementia can be identified in the early stages by understanding and analyzing EEG patterns. Along with diagnosis, artificial neural networks and machine learning can work together for discovering drugs for the treatment of multiple serious illnesses. Furthermore, the introduction of autonomous cars has the potential of reducing traffic jams and accidents. Neural networks can be extensively used for predicting natural calamities like earthquakes, floods, and volcanic eruptions. Data like seismographs and atmospheric pressure can be collected on a daily basis to analyze and predict the occurrence of natural calamities. Additionally, neural networks can effectively predict changes in the weather and the climate. As for the future of artificial neural networks, chatbots built on them are already having a tremendous impact on the retail industry.


Realistic Patch Management Tips, Post-SolarWinds

Security hygiene, including patching, is an essential part of defense, says Pironti. Nevertheless, he says, "We're fooling ourselves if we think we can defend ourselves against a nation-state attack [like the SolarWinds incident] while continuing to release code at the speed we do." Curtis Franklin, senior analyst of enterprise security management at Omdia, says companies must have patch management technology to help automate the process now, "because it's gotten really beyond human-scale at this point." Despite the recent high-profile example of a malicious software update, Pironti says companies should not shy away from deploying updates. "I think we would be doing ourselves a disservice if we started distrusting patches," he says. "I'd rather trust my vendors than question them when there's an exploit in the wild." He does, however, say it's fair to ask for better security hygiene in the software development lifecycle. "We've been trained as a society to accept flawed code," says Pironti. 


Saying goodbye to Internet Explorer might be more complicated than you realise

It's going to be odd to see IE go, as it's been part of Windows' internals for almost as long as it's been around, its Trident engine powering tools like Outlook's browser view and Windows' Help system. Even on systems that have the new Edge set as default, opening an email from Outlook in browser view opens it in Internet Explorer. That's because Outlook uses a technique that encapsulates HTML and any image resources in a single file. MHTML, "MIME encapsulation of aggregate HTML documents", was designed for a world where web pages delivered interactivity with applets or ActiveX controls or Flash, and where designers wanted that dynamic content to be part of an email message. It's a useful tool for building formatted emails, using familiar HTML authoring tools, but bundling all the necessary resources in a single archive that's attached to a message. It's an old technique, but one that's still in use. And with IE about to disappear, can you view those messages in a modern browser like Edge? The answer to that question is complicated. If you set the file associations in Windows 10 to support Outlook's MHTML, emails will open in Edge, but will only display as text and without active links.
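Because MHTML is ordinary MIME, the encapsulation is easy to demonstrate with Python's standard library. The message below is a made-up example; the point is that the HTML body and its image resources travel as parts of a single multipart/related archive:

```python
from email import message_from_string
from email.message import EmailMessage

# Build a toy MHTML-style message: an HTML body plus an inline image,
# bundled into one multipart/related archive (all content is made up).
msg = EmailMessage()
msg["Subject"] = "Formatted newsletter"
msg.set_content("<html><body><img src='cid:logo'></body></html>",
                subtype="html")
msg.add_related(b"\x89PNG...fake bytes...", maintype="image",
                subtype="png", cid="<logo>")

raw = msg.as_string()   # the single file that gets attached to a message

# A consumer unpacks it by walking the MIME tree.
parsed = message_from_string(raw)
parts = [p.get_content_type() for p in parsed.walk() if not p.is_multipart()]
print(parts)   # ['text/html', 'image/png']
```

A browser that understands MHTML does essentially this walk, rendering the HTML part and resolving each `cid:` reference against the bundled resources; a browser that doesn't will show the text and lose the rest, which is the behaviour described above.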


How The Indian FinTech Is Using AI

CogNext also has an automated technology platform, Platform X, which provides ‘nimble, configurable, interactive, scalable and cost effective’ solutions for regulatory compliance. Such solutions allow financial institutions to control the risk they undertake and improve results in integrity and transparency. Platform X works through a technological framework that enables processing customer data and calculations easily. Another element of CogNext is its AutoML solution, which encourages domain and business experts working in financial institutions to use ML and AI to create business value. Teams can use this to develop advanced AI projects without coding or even understanding the underlying ML algorithms. ... Capital Float employs AI technologies along with human insight to facilitate risk assessment and marketing. AI and ML algorithms help the company comprehend the creditworthiness of applicants, allowing it to choose the right type of loans for each applicant. Capital Float also utilised AI models to better target customers in its marketing campaigns. In 2018, it acquired a leading personal finance management app, Walnut, further pushing it into the credit-solutions industry.


Why Good Arguments Make Better Strategy

Many leaders avoid arguing about strategy at all costs. Arguing is equated with fighting and, at best, is considered an unproductive use of people’s time. This is a mistake. Arguing is the best way to do strategy, especially in groups, provided the arguments follow established rules of engagement that are rooted in the principles of deductive logic. Great strategy demands the exchange and vetting of ideas — both in its development and implementation. Listen to Patty McCord, former chief talent officer at Netflix, who asserted, “The main reason the company could continually reinvent itself and thrive, despite so many truly daunting challenges coming at us so fast and furiously, was that we taught people to ask, ‘How do you know that’s true?’ Or my favorite variant, ‘Can you help me understand what leads you to believe that’s true?’” Such questions spawned vigorous internal debates at Netflix that, McCord said, “helped cultivate curiosity and respect and led to invaluable learning both within the team and among functions." Why is debate so powerful? One reason lies in the fallibility of human reasoning. 


Making the Move to a SaaS Usage-Based Model

Many of those enterprise companies are working hard to add a subscription element to their business. Yet, others lack a strong imperative to move away from the on-premise model, either because their customers are satisfied with their current arrangements or IT imperatives prohibit it. The reality, though, is that the subscription model is quickly becoming a business necessity. A recent CIBC World Markets study found that, on an annual basis, SaaS stocks outperformed the mature software names, with an average stock price return of 83% vs. an average year-to-date mature software return of 22%. SaaS providers enjoy higher valuations because subscription earnings are more predictable, and companies that offer them can generate more revenue over the long haul. Eventually, many enterprise companies will also offer their products on a pure consumption or per-usage basis so customers can try new products for a very low cost (or even free) and expand usage as their needs grow, though usage-based consumption is still in the early stages. Moving to SaaS is not a “flip the switch” exercise. It requires a total shift in how management thinks, operates and compensates, and everyone -- from sales ...


Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’

Bias is too narrow a term for the sorts of problems we’re talking about. Time and again, we see these systems producing errors – women offered less credit by credit-worthiness algorithms, black faces mislabelled – and the response has been: “We just need more data.” But I’ve tried to look at these deeper logics of classification and you start to see forms of discrimination, not just when systems are applied, but in how they are built and trained to see the world. Training datasets used for machine learning software that casually categorise people into just one of two genders; that label people according to their skin colour into one of five racial categories, and which attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past and unfortunately the politics of classification has become baked into the substrates of AI. ... Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful? 


The Best First Jobs for People Interested in Entrepreneurship

We are often told to not do "what everyone else is doing." But when it comes to startups, the crowd has a certain wisdom. Traction and exposure beget more traction and exposure, so it's not a bad idea to pay attention to what are currently considered hot startups. Read up on top startup listicles like those on LinkedIn or AngelList. Be agnostic as to what kind of job you can land at these hot companies. Jobs at rapidly growing companies evolve quickly, and titles mean little. Jump in, gain experience and make an impact. Pay special attention to the quality of their funding sources, as this is an essential indicator of their stability and future financing availability. ... Understanding people's needs and learning how to address them is truly the foundational skill of most successful entrepreneurs. Sales positions involve pitching a product, which is helpful, but more importantly, they give you exposure to lots of external people. Especially when you are young and coming out of school, sales roles are great at quickly taking you out of your comfort zone and forcing you to provide value to real people.



Quote for the day:

"Many men may see the King in a Kid but it takes a true leader to nurture it" -- Bernard Kelvin Clive

Daily Tech Digest - June 06, 2021

The computer will see you now: is your therapy session about to be automated?

AI research has not improved significantly since that review, she argues. “Based on the available evidence, I’m not optimistic.” Yet she adds that a personalized approach could work better. Rather than assuming a bedrock of emotional states that are universally recognizable, algorithms could be trained on a single person over many sessions, including their facial expressions, their voice and physiological measures like their heart rate, while accounting for the context of those data. Then you’d have better chances of developing reliable AI for that person, Barrett says. If such AI systems eventually can be made more effective, ethical issues still have to be addressed. In a newly published paper, Torous, Depp and others argue that, while AI has the potential to help identify mental problems more objectively, and it could even empower patients in their own treatment, first it must address issues like bias. During the training of some AI programs, when they are fed huge databases of personal information so they can learn to discern patterns in them, white people, men, higher-income people, or younger people are often overrepresented. As a result they might misinterpret unique facial features or a rare dialect.


WhatsApp Just Gave 2 Billion Users A Reason To Stay

The specter of regulation continues to hang over Facebook and its Big Tech rivals, but this has raised a different regulatory question: At what point does a privately held communication platform become a utility? Social media can be turned on or off with little consequence. But replacing regulated mobile networks with a multinational “over the top” that is used by almost everyone is a different deal. WhatsApp’s biggest victory—the reason it’s now on almost all our phones—was its displacement of SMS as the world’s most popular, most ubiquitous, messaging tool. The nearest equivalent is Apple’s iMessage in some markets, especially the U.S. But iMessage isn’t a separate platform from core, regulated messaging. And, more to the point, it’s owned by a product giant not a data-based advertising giant. WhatsApp’s numbers are interesting. While its penetration in Europe is strong, in the developing world it’s staggering. In Kenya, South Africa, Nigeria, Argentina, Malaysia, Colombia and Brazil it has secured more than 90% of total adult internet users. In most countries, WhatsApp is now the market leader. Think that through when next reading about WhatsApp’s shift into payments and shopping.


Insurance to Mitigate the Risk of AI Systems Coming into View

It’s not clear that AI software suppliers guarantee the accuracy of their algorithms, or that insurance companies cover the risks associated with AI products. Having insurance against AI risk could smooth the path to AI adoption. Among manufacturers trying out AI, many are stuck in “pilot purgatory”–not yet successfully scaling digital transformation. “Greater support for businesses looking to implement new solutions could help to improve the adoption rate,” Yoskovitch stated. Insurers could help enterprises at these three stages of AI adoption, Yoskovitch suggests ... AI failure models are an evolving area of research. “It is not possible to provide prescriptive technological mitigations,” the authors stated. Cyber insurance comes the closest, but is not a perfect fit. If bodily harm occurs because of an AI failure, such as if the image recognition system on an autonomous car fails to perform in snow or frost conditions, cyber insurance is not likely to cover the damage, although it may cover the losses from the interruption of business that results, the authors suggest. 


‘Back to human’: Why HR leaders want to focus on people again

Delivering a great employee experience relies on the same principles used in design thinking for products and services. Like skilled designers, CHROs are starting with the customer and working backward. Where there is a customer journey with its associated pain points, so there are career journeys in every big organization, each with its own identifiable moments of frustration. One thing HR leaders can do along these lines is to harness the energy and insight of their colleagues to increase engagement among new hires and current employees. Cisco, for instance, launched a 24-hour “breakathon” with more than 800 employees that used design-thinking principles to identify the moments that matter most in the interactions between HR and employees. This session led to a complete redesign of onboarding: YouBelong@Cisco, a full prototype solution that targeted common pain points for people starting careers at the company. HR leaders want to use these technologies to help customize and track the needs of each individual on the employee journey, whether that means advancing educational efforts, helping customers and clients to solve problems, supporting the development of colleagues, or simply being part of a great team.


Plea To ML Researchers: Give Data Curation A Chance

Many experts believe data must be used in their natural form to give an unvarnished output. While there is no problem with this argument, Rogers said, it needs more elaboration. “In that case, the “natural” distribution may not even be what we want: e.g. if the goal is a question answering system, then the “natural” distribution of questions asked in daily life (with most questions about time and weather) will not be helpful,” wrote Rogers. She further added there is still a lot of research work that needs to be done before developers can study the world as it is. Some developers feel their data is large enough for their training set to encompass the ‘entire data universe’. Rogers said collecting all data is impossible as it will pose legal, ethical, and practical challenges. Meanwhile, many are in favour of developing algorithmic alternatives to data curation. As per Rogers, this is a good possibility; however, having such solutions, in the current scenario, could be a complementary approach to data curation rather than completely replacing it. A few experts believe data curation is part of the process and should not become a task big enough to forget the original purpose of developing a model.


Ultra-high-density hard drives made with graphene store ten times more data

Graphene enables a two-fold reduction in friction and provides better corrosion and wear protection than state-of-the-art solutions. In fact, one single graphene layer reduces corrosion by 2.5 times. Cambridge scientists transferred graphene onto hard disks made of iron-platinum as the magnetic recording layer, and tested Heat-Assisted Magnetic Recording (HAMR) – a new technology that enables an increase in storage density by heating the recording layer to high temperatures. Current COCs (carbon overcoats) do not perform at these high temperatures, but graphene does. Thus, graphene, coupled with HAMR, can outperform current HDDs, providing an unprecedented data density, higher than 10 terabytes per square inch. “Demonstrating that graphene can serve as protective coating for conventional hard disk drives and that it is able to withstand HAMR conditions is a very important result. This will further push the development of novel high areal density hard disk drives,” said Dr Anna Ott from the Cambridge Graphene Centre, one of the co-authors of this study. A jump in HDDs’ data density by a factor of ten and a significant reduction in wear rate are critical to achieving more sustainable and durable magnetic data recording.


Implementing An Effective Intelligent Master Data Management Strategy

Since MDM is not a one-time implementation or cleansing exercise, business owners must own the data along with the business processes from various departments and units. The data governance process implemented must identify, measure, capture, and rectify data quality issues in the source system itself. In order to keep the strategy running, a formal model to manage said data as a strategic resource should comprise detailed business rules, data stewardship, data control, and compliance mechanisms. The governance aspect of data needs to be treated as part of daily responsibilities rather than a one-off initiative for it to be effective and supported by stakeholders or senior management. ... Before diving deep into the MDM implementation process, defining a future roadmap is crucial in showing how later stages will be accomplished, consistent with the strategic objectives of an organization. This ensures that your MDM exercise does not turn into a catastrophic event due to abject failures from structural flaws that corrupt your entire data system. Further, roll out upgrades, conduct regular testing on standard communication interfaces, and set benchmarks to quantify KPI success, proving stability before opening up the gates to the rest of your data stream.
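Codifying data-quality rules so that issues can be identified and measured in the source system might look like the following sketch (the field names and rules are invented for illustration):

```python
# Hypothetical data-quality rules of the kind an MDM governance
# process would run continuously against a source system.
RULES = {
    "customer_id": lambda v: isinstance(v, int) and v > 0,
    "email":       lambda v: isinstance(v, str) and "@" in v,
    "country":     lambda v: v in {"DE", "FR", "GB", "US"},
}

def measure(records):
    """Identify and measure rule violations, per record and per field."""
    issues = []
    for i, rec in enumerate(records):
        for field, ok in RULES.items():
            if not ok(rec.get(field)):
                issues.append((i, field))
    return issues

records = [
    {"customer_id": 17, "email": "a@example.com", "country": "DE"},
    {"customer_id": -1, "email": "not-an-email", "country": "DE"},
]
print(measure(records))   # [(1, 'customer_id'), (1, 'email')]
```

Running such checks daily, and feeding the violation counts into the KPI benchmarks mentioned above, is what turns governance from a one-off cleansing exercise into an ongoing responsibility.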


Neuromorphic Chip: Artificial Neurons Recognize Biosignals in Real Time

The researchers first designed an algorithm that detects HFOs by simulating the brain’s natural neural network: a tiny so-called spiking neural network (SNN). The second step involved implementing the SNN in a fingernail-sized piece of hardware that receives neural signals by means of electrodes and which, unlike conventional computers, is massively energy efficient. This makes calculations with a very high temporal resolution possible, without relying on the internet or cloud computing. “Our design allows us to recognize spatiotemporal patterns in biological signals in real time,” says Giacomo Indiveri, professor at the Institute for Neuroinformatics of UZH and ETH Zurich. The researchers are now planning to use their findings to create an electronic system that reliably recognizes and monitors HFOs in real time. ... However, this is not the only field where HFO recognition can play an important role. The team’s long-term target is to develop a device for monitoring epilepsy that could be used outside of the hospital and that would make it possible to analyze signals from a large number of electrodes over several weeks or months.
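The UZH/ETH network itself is proprietary hardware, but the core building block of any SNN, the spiking neuron, can be sketched in a few lines. The following is an illustrative leaky integrate-and-fire model, not the researchers' actual design; the parameter values and the input signals are invented for demonstration:

```python
def lif_neuron(inputs, tau=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: it integrates input
    current with exponential leak and emits a spike when the membrane
    potential crosses the threshold, then resets."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = tau * v + i          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)     # spike
            v = 0.0              # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# A burst of strong input (standing in for a high-frequency
# oscillation) drives the neuron to spike; quiet input does not.
quiet = [0.05] * 10
burst = [0.6] * 10
print(sum(lif_neuron(quiet)), sum(lif_neuron(burst)))  # 0 5
```

Because the neuron only does work when spikes occur, hardware built from such units can be far more energy efficient than a conventional processor polling the signal continuously, which is the property the article highlights.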


Hardware buyers are scrambling to find chip shortage work-arounds

Because World Insurance runs most of its operations on a private cloud in its own data center, finding the servers it needs to expand its operations is an ongoing battle. Before the chip shortage, the company would primarily buy white-label servers to add capacity. Now it, like so many others, is sourcing servers from wherever it can find them. Many manufacturers are in the same boat, said Jens Gamperl, CEO of Sourcengine, an online marketplace for electronic components. Gamperl's customers are scrambling to find chips from any source—regardless of whether the supplier and its products have been vetted. ... To ensure some sort of quality control, manufacturers are asking Sourcengine to perform those functions. Price gouging also is a big issue. Parts that cost pennies pre-pandemic are now going for many multiples of their old price. "I came across, four weeks or five weeks ago, a situation where a 50 cent part was offered to us for $41," he said. For large businesses, these increased expenses shouldn't have a noticeable impact on the bottom line, given that other expenses, like travel, went to zero, he said.


Hybrid work: How to prepare for the turnover tsunami

Among the multiple factors at play, according to the Prudential Financial survey, are employee concerns about career advancement. ... Additionally, the wide and rapid acceptance of remote work has opened up new job opportunities to work from anywhere. It's a perfect storm for creating some degree of turnover, says Brian Abrahamson, CIO and the associate laboratory director for communications and IT at the U.S. Department of Energy's Pacific Northwest National Lab. "We used to talk about the impacts of fear, uncertainty, and doubt on people. Add to this the impacts of burnout and isolation and you have a recipe for workforce chaos," Roberts says. "A question every CIO should be asking their people managers is, 'Are the recruiters who are trying to poach our people painting a better picture of a future working with their company than we are of ours?'" The time to start addressing anticipated turnover is now. "If you acknowledge that the risk factors affecting the likelihood of increased attrition in the near term are there, the first recommendation I would make is simple: Accept and prepare for it," says Selective Insurance CIO John Bresney.



Quote for the day:

"If you care enough for a result, you will most certainly attain it." -- William James

Daily Tech Digest - June 05, 2021

The rise of cybersecurity debt

Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt. We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken. First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.


Zero trust: The good, the bad and the ugly

Right from the start, the name zero trust has unwelcome implications. On the surface, it appears that management does not trust employees or that everything done on the network is suspect until proven innocent. "While this line of thinking can be productive when discussing the security architecture of devices and other digital equipment, security teams need to be careful that it doesn't spill over to informing their policy around an employer's most valuable asset, its people," mentioned Jason Meller, CEO and founder at Kolide. "Users who feel their privacy is in jeopardy, or who do not have the energy to continually justify why they need access to resources, will ultimately switch to using their own personal devices and services, creating a new and more dangerous problem—shadow IT," continued Meller. "Frustratingly, the ill-effects of not trusting users often forces them to become untrustworthy, which then in turn encourages IT and security practitioners to advocate for more aggressive zero trust-based policies." In the interview, Meller suggested the first thing organizations looking to implement zero trust should do is form a working group with representatives from human resources, privacy experts and end users themselves.


From Boardroom To Service Floor: How To Make Cybersecurity An Organizational Priority Now

Of course, companies don’t just want to identify risk. They want to prevent relevant threats and secure their IT infrastructure. To achieve this, boardrooms, C-suite executives and cybersecurity teams will need to focus on the most potent risks — from insider threats to misconfigured databases — to enhance their defensive posture to meet the moment. This should begin by addressing your in-house vulnerabilities. With so many data breaches caused, in part, by employees, companies can defend data by enhancing their educational and oversight protocols. For instance, employee monitoring that harnesses user behavior analytics can empower companies to identify employees who might be vulnerable to a phishing scam, allowing leaders to direct teaching and training to mitigate the risk. (Full disclosure: Employee monitoring is among my company’s key provisions.) Similarly, cybersecurity software that restricts data access, movement and manipulation can ensure that data is available on a need-to-know basis, reducing opportunities for negligence or accidents to undermine data security.


How Testers Can Contribute to Product Definition

The approach that has proven successful in closing the understanding gap is "listening before talking". In practice, this means meeting the stakeholders, learning about their motivation and goals, building relationships and establishing a collaboration – basically, a feedback loop. The next step was to explore the clients’ needs and their user personas by talking to product manager(s), reading industry-related articles, or analyzing customer data, because each user persona has a different goal and therefore a different task to complete in our product. For me, it’s essential to understand these differences to learn what is important to each one of them and aim for the specific quality characteristics when providing feedback on design, user experience, or product requirements. ... Practically, the shorter the feedback loop, the better. To make it shorter, I try to be there when the project starts to kick off and requirements are shaped, or when first prototypes are done, and generally be proactive by asking what’s the next important thing, inviting different stakeholders for pairing and collaborating closely to discover and share important information about the product.

API Security Depends on the Novel Use of Advanced ML & AI

By creating API-driven applications, we have exposed a much bigger attack surface. That’s number one. Number two, of course, we have made it challenging for the attackers, but the attack surface, being so much bigger now, needs to be dealt with in a completely different way. The older class of applications took a rules-based system as the common approach to solving security use cases. Because they just had a single application, and the application would not change that much in terms of the interfaces it exposed, you could build in rules to analyze how traffic goes in and out of that application. Now, when we break the application into multiple pieces, and we bring in other paradigms of software development, such as DevOps and Agile development methodologies, this creates a scenario where the applications are always rapidly changing. There is no way rules can catch up with these rapidly changing applications. We need automation to understand what is happening with these applications, and we need automation to solve these problems, which rules alone cannot do.
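The contrast between a hand-written rule and a baseline learned from traffic can be made concrete. The sketch below (illustrative only; the class name, window size, and threshold are invented, not from any vendor's product) learns the normal request rate per endpoint from the traffic itself and flags large deviations, so new or changed endpoints are covered without anyone updating a rule:

```python
from collections import deque
import statistics

class AdaptiveBaseline:
    """Learns a rolling baseline of request rates per endpoint and
    flags large deviations, the kind of check a static rule set
    cannot keep current as endpoints appear and change."""
    def __init__(self, window=50, threshold=3.0):
        self.window = window          # samples kept per endpoint
        self.threshold = threshold    # deviation, in standard deviations
        self.history = {}

    def observe(self, endpoint, requests_per_min):
        h = self.history.setdefault(endpoint, deque(maxlen=self.window))
        anomalous = False
        if len(h) >= 10:              # wait for a minimal baseline
            mean = statistics.mean(h)
            stdev = statistics.pstdev(h) or 1.0
            anomalous = abs(requests_per_min - mean) / stdev > self.threshold
        h.append(requests_per_min)    # the baseline keeps adapting
        return anomalous

detector = AdaptiveBaseline()
for rate in [20, 22, 19, 21, 20, 23, 18, 22, 21, 20]:
    detector.observe("/api/orders", rate)
print(detector.observe("/api/orders", 500))  # True: large spike flagged
```

Production systems use far richer features than a request rate, but the principle is the same: the model of "normal" is derived from the application's own behavior rather than written down once and left to go stale.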


Everything You Need To Know About India’s Centre for Artificial Intelligence and Robotics

CAIR is involved in research and development in AI, robotics, command and control, networking, information and communication security, along with the development of mission-critical products for battlefield communication and management systems. CAIR was appraised for Capability Maturity Model Integration (CMMI) Maturity Level 2 in 2014 and has ISO 9001:2015 certification. As part of the Defence Research and Development Organisation (DRDO), robotics was one of the priority areas of CAIR, said V S Mahalingam, former director, CAIR. Mahalingam joined DRDO in 1986 and served in Electronics & Radar Development Establishment (LRDE) till 2000 before he moved to CAIR. “Concentrating on the development of totally indigenous robots, the lab developed a variety of controllers and manipulators for Gantry, Scara, and other types of robots. With the experience gained from these initial years, the lab developed an autonomous guided vehicle (AGV). The expertise in control systems required for robotics was applied to the development of control laws for Tejas fighter,” Mahalingam added.


How do I become a network architect?

For the most part, network architects fall into department management roles overseeing teams of network engineers, system administrators, and perhaps application developers. The goal of a network architect is to design efficient, reliable, cost-effective network infrastructures that meet the long-term information technology and business goals of an organization. The trick is to accomplish those long-term goals while also permitting the organization to meet its short-term business goals and financial obligations. ... Successful network architects must be able to see the big picture regarding current and future information technology infrastructure, not only for the organization but for the industry and general business environment as well. Individuals fulfilling the job role must be able to produce a documented vision of network infrastructure now and in the future. Documentation is important because a network architect must be able to present their vision of current and future network needs and goals to C-level management, employees, and other stakeholders. They must be able to communicate why their vision is correct, and why those stakeholders should provide the resources necessary to bring that vision to fruition.


The Beauty of Edge Computing

The volume and velocity of data generated at the edge are primary factors that will impact how developers allocate resources at the edge and in the cloud. “A major impact I see is how enterprises will manage their cloud storage because it’s impractical to save the large amounts of data that the Edge creates directly to the cloud,” says Will Kelly, technical marketing manager for a container security startup. “Edge computing is going to shake up cloud financial models so let’s hope enterprises have access to a cloud economist or solution architect who can tackle that challenge for them.” With billions of industrial and consumer IoT devices being deployed, managing the data is an essential consideration in any edge-to-cloud strategy. “Advanced consumer applications such as streaming multiplayer games, digital assistants and autonomous vehicle networks demand low-latency data so it is important to consider the tremendous efficiencies achieved by keeping data physically close to where it is consumed,” says Scott Schober, President/CEO of Berkeley Varitronics Systems, Inc. It’s not much of a stretch to view edge as an integral computing tier of the fast-evolving hybrid cloud.
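The storage economics described above come down to reducing data before it leaves the edge. As a toy illustration (the function and field names are invented, not from any edge SDK), an edge node might collapse a window of raw sensor samples into a single summary record, which is all that needs to cross the network to the cloud:

```python
from statistics import mean

def summarize_window(readings):
    """Reduce a window of raw sensor samples to a compact summary
    record, the only data that needs to leave the edge node."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# 600 raw samples collapse into one four-field record for the cloud.
raw = [20.0 + (i % 10) * 0.1 for i in range(600)]
summary = summarize_window(raw)
print(summary["count"], summary["mean"])  # 600 20.45
```

Whether summarization like this is acceptable depends on the application: a fleet dashboard may only need per-minute aggregates, while an anomaly detector may need raw samples kept locally and shipped only on demand.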


Is STG Building a New Cybersecurity Powerhouse?

The consensus is STG will likely form either a complete new company out of its newly acquired businesses - hoping the sum of the parts will make STG a major player in the security space - or simply allow customers to pull together a security plan on an a la carte basis from STG's various parts. "You can see a future where we're going to have a clash of some really sophisticated industry heavyweights. You're going to have to compete with Microsoft; you're going to have to compete with Cisco. So if you're going to get in a fight with Microsoft and Cisco, you better bring a big stick. And it looks like they've now got a big stick," says Frank Dickson, program vice president at IDC. Peter Firstbrook, vice president and analyst with Gartner, believes STG is putting together a portfolio to deliver a one-stop shopping experience for those looking for a suite of cybersecurity products and solutions to protect their organization. "One trend they could take advantage of is the propensity of buyers to seek out fewer, more strategic vendors that have integrated solutions," Firstbrook says. "Eighty percent of buyers want to consolidate the number of security products and vendors to make their security operations more efficient."


Using Distributed Tracing in Microservices Architecture

Observability is monitoring the behavior of infrastructure at a granular level. This facilitates maximum visibility within the infrastructure and supports the incident management team in maintaining the reliability of the architecture. Observability is achieved by recording system data in various forms (tools) such as metrics, alerts (events), logs, and traces. These functions help in deriving insights into the internal health of the infrastructure. Here, we are going to discuss the importance of tracing and how it evolved into a technique called distributed tracing. Tracing is continuous supervision of an application’s flow and data progression, often representing the track of a single user’s journey through an app stack. Traces make the behavior and state of an entire system more obvious and comprehensible. Distributed request tracing is an evolutionary method of observability that helps keep cloud applications in good health. Distributed tracing is the process of following a transaction request and recording all the relevant data throughout the path of a microservices architecture.
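The core mechanic can be sketched in a few lines (the function and header names below are invented for illustration; real systems use a tracing SDK such as OpenTelemetry and the standard `traceparent` header): each service reuses the trace ID it receives, or starts a new trace if there is none, so every span recorded along the request path can later be stitched into one end-to-end trace.

```python
import time
import uuid

spans = []  # stand-in for a span exporter / tracing backend

def handle_request(headers, service, downstream=None):
    """Each service reuses the incoming trace ID (or starts a new
    trace) and records a span for its own work, so all spans from
    one user request share a single trace ID."""
    trace_id = headers.get("x-trace-id") or uuid.uuid4().hex
    span = {"trace": trace_id, "service": service, "start": time.time()}
    if downstream:
        downstream({"x-trace-id": trace_id})  # propagate the context
    span["end"] = time.time()
    spans.append(span)
    return trace_id

# frontend -> orders -> billing: one request, three spans, one trace.
t = handle_request({}, "frontend",
        lambda h: handle_request(h, "orders",
            lambda h2: handle_request(h2, "billing")))
print(len(spans), len({s["trace"] for s in spans}))  # 3 1
```

A tracing backend then groups spans by trace ID and orders them by timestamps, which is how a single user's journey through the stack becomes visible.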



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead." -- John Paul Warren

Daily Tech Digest - June 04, 2021

We’ve all had to learn new ways of leading and managing. But it’s important to keep the company culture alive, and the best workplace cultures are built on a foundation of trust and autonomy. Leaders can inadvertently undermine that by monitoring employee activities too closely and checking in too often. Micromanaging can hurt morale and stifle engagement, creativity, and innovation. So, if you’ve strayed into micromanager mode, it’s time to rebalance your approach. Keep in mind that one byproduct of a remote work schedule is that people may be tackling their workload outside the usual 9-5 schedule. Maybe they’re working later in the evening or earlier in the morning, so they’ll have time to deal with the kids’ schooling. As long as quality work is getting done, does that matter? As a manager, you need to figure out what’s important and get clarity on how changes in work routines affect business goals. Align the company vision with specific business goals and make sure that the way employees complete tasks (and how you interact with your team) support those goals. That’s how you can empower your people and maintain control where it counts without overdoing it.


Ancestry’s DevOps Strategy to Control Its CI/CD Pipeline

We had this DevOps culture of, “You own the code, so you own everything about deploying the code.” It was very much kind of like a startup mentality in terms of how we dealt with teams and DevOps. We had a large, centralized team that handled operations before that. As part of our technological transformation, we went from this large centralized operations team, where you throw your code over the wall and let them deploy it, to “You own your deploys.” In that process, we ended up basically not giving teams a whole lot of direction. ... We’ll get you the rules that you’ll need but the process is up to you. Teams started to share best practices; some teams would adopt other team’s best practices but in that kind of ecosystem there’s a lot of divergent paths you can take in how you deploy your code. That’s exactly what happened to us. We had a very fragmented ecosystem of processes. We started to have a lot of issues with that, which in turn led us to start to create policies but the policies weren’t very enforceable because we didn’t have any insight into how they were being applied in each team’s ecosystem.


Cryptocurrency dealers face closure for failing UK money laundering test

The governor of the Bank of England, Andrew Bailey, has told investors they should be prepared to lose all their money if they dabble in cryptocurrencies. Crypto assets are not covered by UK schemes that help investors reclaim cash when companies go bust. The European Central Bank has compared bitcoin’s meteoric rise to other financial bubbles such as “tulip mania” and the South Sea Bubble, which burst in the 17th and 18th centuries. However, banks including Goldman Sachs and Standard Chartered have launched their own cryptocurrency trading desks to take advantage of their rapid growth. The price of bitcoin has tumbled 40% since hitting all-time highs of more than $64,000 (£45,000) in mid-April. It was trading at $38,706 on Thursday afternoon. Only five crypto asset firms have been admitted to the FCA’s formal register so far. Another 90 firms are being assessed through the temporary permit scheme, which has been extended by nine months to allow the FCA to fully review all of the applications. While a further 51 have withdrawn their applications, some may not be covered by the FCA’s rules to register, meaning not all of them will be forced to shut down.


Conversation about the .NET type system

One thing to remember about the line between CLR and C# concepts is that CLR concepts provide the possibility to make some logic work, and C# concepts provide an interface for actual developers to work with. The C# concepts are an opinionated view on the possible programs that can be written using CLR concepts, and over time, the developers of the C# language have found ways for programmers to more clearly and succinctly represent intent on a fairly regular cadence, while the fundamental capabilities provided by CLR concepts are typically much slower to evolve. ... Having classes that behave like values has always been possible in C# and there are many types in the framework that already do this. Generally though these classes fall into the category of “data” style objects, Tuple<> for example. It’s not good or bad to do this; it’s instead an exercise in evaluating trade-offs: heap vs. stack, cost of passing / returning, etc … In the case of records we wanted to explore classes first because that is what most of the customers who valued records were already using. In future versions of the language, though, we will allow them to be declared as structs as well, to help customers who need to make different trade-off decisions.


A Beginner’s Guide To Intel oneAPI

oneAPI allows data parallelism by leveraging two types of programming: API-based programming and direct programming. Within API-based programming, the algorithm for this parallel application development is hidden behind a system-provided API. oneAPI defines a set of APIs for commonly used data-parallel domains and provides library implementations across various hardware platforms. This enables a developer to maintain performance through multiple accelerators with minimal coding and tuning. ... oneDPL has algorithms and functions to speed up DPC++ kernel programming. The oneDPL library follows the C++ standard library’s functions and includes extensions to support data parallelism and extensions to simplify data-parallel algorithms. ... oneMKL is used for fundamental mathematical routines in high-performance computing and applications. This functionality is divided into dense linear algebra, sparse linear algebra, discrete Fourier transforms, random number generators, and vector math. ... oneDAL helps speed up big data analyses by providing optimised building blocks for algorithms for different stages of data analytics—preprocessing, transformation, analysis, modelling, validation, and decision making.


The growing pains of quantum computing

Large corporations now have the resources and relationships to access machines directly, and those machines are available from IBM, from Honeywell, and from other companies as well. It’s also now possible to subscribe to these machines, because some of the big cloud providers (Amazon Web Services and Azure are two examples) have taken initial steps towards offering what we might describe as quantum processing units alongside regular high-performance computing. Those early access agreements are now available for subscription, sometimes on a daily or even an hourly basis. And then beneath all of that, there is a clutch of start-ups like IQM in Finland, Alpine Quantum Technologies in Austria and Oxford Quantum Computing in the UK that are all on a very steep trajectory. Their processors will be available in a variety of ways. All of this means that a large corporate entity has a variety of ways of accessing quantum processors, and what we do is to pull all of that together. We have two distinguishing features. 


Don’t Let Employees Pick Their WFH Days

One concern is managing a hybrid team, where some people are at home and others are at the office. I hear endless anxiety about this generating an office in-group and a home out-group. For example, employees at home can see glances or whispering in the office conference room but can’t tell exactly what is going on. Even when firms try to avoid this by requiring office employees to take video calls from their desks, home employees have told me that they can still feel excluded. They know after the meeting ends the folks in the office may chat in the corridor or go grab a coffee together. The second concern is the risk to diversity. It turns out that who wants to work from home after the pandemic is not random. In our research we find, for example, that among college graduates with young children women want to work from home full-time almost 50% more than men. This is worrying given the evidence that working from home while your colleagues are in the office can be highly damaging to your career. In a 2014 study I ran in China in a large multinational we randomized 250 volunteers into a group that worked remotely for four days a week and another group that remained in the office full time.


Quantum computing: How should cybersecurity teams prepare for it?

For those organizations not involved in the development of quantum computers, preparatory actions are clear. We must urgently overcome our inability to keep existing computers secure; the quantum computer of the future will be of little use if we fail to break our dependency on legacy technology and poor management practices today. And as quantum computing improves, we must remain in front of our adversaries by leveraging new technology before it is adopted by those who wish to do us harm. ... Quantum computing is far too immature for any immediate real-world application or for us to see the benefits that its theory promises. We can make some educated guesses, though. Peter McMahon, Applied and Engineering Physics at Cornell University, writes of quantum computing capabilities, “We’re trying to find something useful we can do with a near-term quantum computer that would answer a question in quantum gravity, or high-energy physics more generally, that couldn’t be answered otherwise, for instance, can we simulate a model of a black hole on a quantum computer? Would that be useful? We don’t know if we’ll find anything, but it’s very interesting to try.”


Exchange Servers Targeted by ‘Epsilon Red’ Malware

The initial point of entry for the attack was an unpatched enterprise Microsoft Exchange server, from which attackers used Windows Management Instrumentation (WMI) – a scripting tool for automating actions in the Windows ecosystem, primarily used on servers – to install other software onto machines inside the network that they could reach from the Exchange server. It’s not entirely clear if attackers leveraged the infamous Exchange ProxyLogon exploit that was a major pain point for Microsoft earlier in the year. However, the unpatched server used in the attack was indeed vulnerable to this exploit, Brandt observed. During the attack, threat actors launched a series of PowerShell scripts, numbered 1.ps1 through 12.ps1, as well as some that were named with a single letter from the alphabet, to prepare the attacked machines for the final ransomware payload. The scripts also delivered and initiated the Epsilon Red payload, he wrote. The PowerShell scripts use a “rudimentary form of obfuscation” that didn’t hinder Sophos researchers’ analysis but “might be just good enough to evade the detection of an anti-malware tool that’s scanning the files on the hard drive for a few minutes, which is all the attackers really need,” Brandt noted.


How Hasura 2.0 Works: A Design and Engineering Look

Hasura can implement API caching for dynamic data automatically because Hasura’s metadata configuration has detailed information about both the data models and the authz rules, which in turn have information about which user can access what data. This is very useful because, otherwise, developers often need to build web APIs that provide data access manually. Moreover, devs need to have deep domain knowledge so that they can then build caching strategies that recognize which queries to cache for which users/user groups, using caching stores like Redis to provide API caching. But this is just a part of the problem. The harder bit is cache invalidation. Developers use TTL-based caching to avoid worrying about cache invalidation vs. consistency and let the API consumers deal with the inconsistency. Hasura can, in theory, provide automated cache invalidation as well, because Hasura has deep integrations into the sources of data and all access to this data can go through Hasura, or it can use the data source’s CDC mechanism. This part of the caching problem is similar to the “materialized view update” issue.
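The TTL trade-off mentioned above can be sketched in a few lines of Python (an illustration of the general pattern, not Hasura's implementation): results are served from the cache until a fixed time-to-live expires, so consumers may read stale data for at most `ttl` seconds, in exchange for never having to solve invalidation.

```python
import time

class TTLCache:
    """Query results are kept for a fixed time-to-live; after that
    they are recomputed. Consumers may see stale data for up to
    `ttl` seconds, which is the trade-off TTL caching accepts."""
    def __init__(self, ttl=2.0):
        self.ttl = ttl
        self.store = {}

    def get(self, key, compute):
        entry = self.store.get(key)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]              # cache hit: possibly stale
        value = compute()                # miss or expired: recompute
        self.store[key] = (value, time.monotonic())
        return value

calls = 0
def query():
    global calls
    calls += 1                           # count trips to the database
    return "rows"

cache = TTLCache(ttl=0.1)
cache.get("q1", query); cache.get("q1", query)  # second call is a hit
time.sleep(0.15)
cache.get("q1", query)                          # expired: recomputed
print(calls)  # 2
```

Automated invalidation replaces the timer with knowledge of when the underlying data actually changed, via change data capture or by observing all writes, which is exactly the integration advantage the passage attributes to Hasura.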



Quote for the day:

"Speak softly and carry a big stick; you will go far. -- Theodore Roosevelt

Daily Tech Digest - June 03, 2021

Preparing for the Upcoming Quantum Computing Revolution

The primary challenge to successful quantum computing lies within the technology itself. In contrast to classical computers, a quantum computer employs quantum bits, or qubits, that can be both 0 and 1 at the same time, Jagannathan says. Such two-way states give a quantum computer its power, yet even the slightest interaction with their surroundings can create distortion. "Correcting these errors, known as quantum error correction (QEC), is the biggest challenge and progress has been slower than anticipated," he says. There's also an important and possibly highly destructive aspect to quantum technology. "In addition to [a] wide range of benefits . . . it is also expected that [cybercriminals] will someday be able to break public key algorithms that serve as a basis for many cryptographic operations, like encryption or digital signatures," says Colin Soutar, managing director and cyber and strategic risk leader with Deloitte & Touche. "It's important that organizations carefully understand what exposure they may have to this [threat] so that they can start to take mitigation steps and not let security concerns overshadow the positive potential of quantum computing," says Soutar.
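The "both 0 and 1 at the same time" property has a compact mathematical form: a qubit's state is a unit vector of amplitudes over the basis states, and squaring the amplitudes gives the measurement probabilities (the Born rule). A small pure-Python sketch of an equal superposition:

```python
import math

# Basis states |0> and |1> written as amplitude vectors.
ket0 = [1.0, 0.0]
ket1 = [0.0, 1.0]

# Equal superposition (|0> + |1>) / sqrt(2): the state carries
# amplitude 1/sqrt(2) on each basis state simultaneously.
superposition = [(a + b) / math.sqrt(2) for a, b in zip(ket0, ket1)]

# Born rule: measurement probabilities are the squared amplitudes,
# so measuring this state yields 0 or 1 with equal probability.
probabilities = [amp ** 2 for amp in superposition]
print([round(p, 3) for p in probabilities])  # [0.5, 0.5]
```

The fragility Jagannathan describes shows up in this picture as unwanted changes to the amplitudes from environmental interaction; quantum error correction spreads one logical qubit across many physical qubits so such distortions can be detected and undone.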


DataOps Goes Mainstream As Atlan Lands Big

Data drives business growth and provides valuable insights prior to any conclusive decision-making. As enterprises scale, many challenges surface. For instance, working professionals, including data scientists, analysts, and engineers, join in with different skill-sets and tools. Different people, different tools, different working styles – all these lead to a major bottleneck. Business segments are in dire need of data management to create contextual insights; now is the time to improve the quality and speed of data streaming into the organisation and get leadership commitment to support and sustain a data-driven vision across the company. This is where DataOps (data operations) comes in handy. For instance, users can integrate their tables from Databricks with Atlan in a series of steps. Initially, there are some prerequisites for establishing a connection between the Atlan and Databricks accounts: go to the Databricks console and select “Clusters” from the left sidebar; select the cluster you want to connect with Atlan (the cluster should be in a Running state for the Atlan crawler to fetch metadata from it); then click on “Advanced Options” in the “Configuration” tab.


Ransomware-as-a-service: How DarkSide and other gangs get into systems to hijack data

They're offering a service and they sit somewhere on the darker side of the internet and they offer what's called ransomware-as-a-service. They recruit affiliates or essentially sub-contractors who come in, who use their platform and then attack companies. And in the case of DarkSide, if you actually logged into the infrastructure and take a look at it, which is something we in the research community actively do, they had a very polished operation. They provide technical support for their affiliates who are breaking into companies. They provide monetization controls so that an affiliate can go in and see how much has been paid and what's outstanding and manage the money and all that. They're basically like companies and that's the challenge with ransomware now is it's moved from this sort of opportunistic thing where there were a few criminals scattered around the world doing this, to being these as-a-service operations that basically mean any enterprising criminal can get access to ransomware for, I've seen it for less than $100, and then use that to infect stuff. And obviously at the lower end, you're talking about things that aren't very sophisticated. The problem is it doesn't need to be sophisticated.


3 Methods to Reduce Overfitting of Machine Learning Models

The most robust method to reduce overfitting is to collect more data. The more data we have, the easier it is to explore and model the underlying structure. The methods we will discuss in this article are based on the assumption that it is not possible to collect more data. Since we cannot get any more data, we should make the most out of what we have. Cross validation is a way of doing so. In a typical machine learning workflow, we split the data into training and test subsets. In some cases, we also put aside a separate set for validation. The model is trained on the training set. Then, its performance is measured on the test set. Thus, we evaluate the model on previously unseen data. In this scenario, we cannot use a portion of the dataset for training; we are effectively wasting it. Cross validation allows every observation to be used in both the training and test sets. Ensemble models consist of many small (i.e. weak) learners. The overall model tends to be more robust and accurate than the individual learners. The risk of overfitting also decreases when we use ensemble models. The most commonly used ensemble models are random forests and gradient boosted decision trees.
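The core idea behind cross validation — every observation serves in a test fold exactly once across the k splits — can be sketched in a few lines of plain Python. The function name `k_fold_indices` is illustrative; libraries such as scikit-learn provide production-grade implementations (e.g. `KFold`):

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train, test) index lists so that every observation
    lands in the test fold exactly once across the k splits."""
    indices = list(range(n_samples))
    # Distribute any remainder so fold sizes differ by at most one
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# Example: 10 observations, 5 folds; each observation is tested once
folds = list(k_fold_indices(10, k=5))
```

Training and evaluating the model once per fold, then averaging the k test scores, gives a performance estimate that uses every observation for both training and testing.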


IT’s silent career killer: Age discrimination

There is a widespread misconception in most industries that older employees are not “digital savvy” and are afraid to learn new things when it comes to technology, Miklas adds. “This assumption often results in decisions that can result in being sued for age discrimination, especially when the older worker is passed over for promotion, not hired, or terminated,” he says. One issue that arises more in age discrimination claims than other types of discrimination is an employer’s use of selection criteria for hiring, promotion, or layoff decisions that are susceptible to assumptions about age, says Raymond Peeler, director of the Coordination Division, Office of Legal Counsel at the U.S. Equal Employment Opportunity Commission (EEOC). “For example, an employer making determinations about workers based on ‘energy,’ ‘flexibility,’ ‘criticality,’ or ‘long-term concerns’ are susceptible to employer assumptions based on the age of the worker,” Peeler says. The EEOC is responsible for enforcing federal laws that make it illegal to discriminate against job applicants or employees because of a person’s race, color, religion, sex, national origin, disability, genetic information, or age.


Helium Network combines 5G, blockchain and cryptocurrency

Self-styled ‘The People’s Network,’ the existing LoRa-based Helium Network is live with 28,000+ hotspot devices deployed in over 3,800 cities worldwide, and there are 200,000+ hotspot devices on backorder from various manufacturers. Helium aims to take that experience and apply it to a new tier of 5G connectivity enabled by the unique CBRS spectrum, 3550 MHz-3700 MHz, which the US Federal Communications Commission has made available on three tiers of access, two of which are open to non-government users. Though the Priority Access level is licensed, General Authorized Access permits open access for the widest group of potential users and use cases. Using gateways from Helium partner FreedomFi, hotspot hosts – including individual consumers – will have the option to earn Helium’s own HNT cryptocurrency, in part by offloading carrier cellular traffic to their 5G hotspots. The FreedomFi Gateways will be compatible with Helium’s existing open-source blockchain and IoT network and will by default act as a Helium hotspot, also mining rewards for proof of coverage and data transfers on the IoT network.


Abu Dhabi could achieve technological sovereignty thanks to quantum computing, says expert

In a panel discussion on whether UAE fintech is going global, Ellen Moeller, head of EMEA partnerships at Stripe, a San Francisco-based company that offers software to manage online payments, said key areas of interest for fintechs included ensuring that transactions were a “very frictionless experience” for consumers. “They’re used to calling a taxi from the touch of a button,” she said. “Why shouldn’t it be so simple when we’re talking about financial services? There’s a lot of opportunity for innovation for fintech. “The final piece is regulators and central banks embracing this innovation. I think we’ve only scratched the surface of fintech innovation and there’s lots more to come.” She added that the UAE “has all the right ingredients” to be a world-class technology and fintech hub, including a deep pool of talent and a good investment climate. “We’ve seen the UAE do a remarkable job at fostering fintech,” she added. The region is seeing rapid growth in the number of tech start-ups in a range of fields, according to Vijay Tirathrai, managing director of Techstars, a company in the US state of Colorado that supports tech start-ups.


A Quantum Leap for Quantum Computing

Quantum computers are expected to greatly outperform the most powerful conventional computers on certain tasks, such as modeling complex chemical processes, factoring large numbers, and designing new molecules that have applications in medicine. These computers store quantum information in the form of quantum bits, or qubits — quantum systems that can exist in a superposition of two different states. For quantum computers to be truly powerful, however, they need to be “scalable,” meaning they must be able to scale up to include many more qubits, making it possible to solve some challenging problems. “The goal of this collaborative project is to establish a novel platform for quantum computing that is truly scalable up to many qubits,” said Boerge Hemmerling, an assistant professor of physics and astronomy at UC Riverside and the lead principal investigator of the three-year project. “Current quantum computing technology is far away from experimentally controlling the large number of qubits required for fault-tolerant computing. ...”


Everyone Wants to Build a Cyber Range: Should You?

The most compelling reason for building a cyber range is that it is one of the best ways to improve the coordination and experience level of your team. Experience and practice enhance teamwork and provide the necessary background for smart decision-making during a real cyberattack. Cyber ranges are one of the best ways to run real attack scenarios and immerse the team in a live response exercise. An additional reason to have access to a cyber range is that many compliance certifications and insurance policies cite mandatory cyber training of various degrees. These are driven by mandates and compliance standards established by the National Institute of Standards and Technology and the International Organization for Standardization (ISO). With these requirements in place, organizations are compelled to free up budgets for relevant cyber training. There are different ways to fulfill these training requirements. Per their role in the company, employees can be required to undergo certifications by organizations such as the SANS Institute. 


The biggest diversity, equity and inclusion trends in tech

It’s important to take a look at the hiring strategy, and make sure that it attracts a diverse talent pool. Nabila Salem, president at Revolent Group, commented: “For the tech industry, there is more than just a moral imperative to solve the issue of missing equity. The lack of diversity within the tech sector also compounds upon a very real business challenge for organisations: a lack of available talent. “The consequences of not plugging this skills gap are of great concern: GDP growth across the G20 nations could be stunted by as much as $1.5 trillion over the next decade, if companies refuse to adapt to the needs that tech presents to us. “One way to overcome this is to invest in new, diverse talent to help solve both the skills gap and the lack of representation in tech. New, innovative programs like the Salesforce training provided by Revolent specialise in fuelling the market with the diverse, highly skilled new talent it so desperately needs. “There is an opportunity here, to address the issue of a lack of representation and an overall skills gap, all at once. Companies must be open to the idea that the average applicant is not as homogenous as they think. ...”


Shifting to Continuous Documentation as a New Approach for Code Knowledge

Continuously verifying documentation means making sure that the current state of the documentation matches the current state of the codebase, as the code evolves. In order to keep the docs in sync with the codebase, existing documentation needs to be checked against the current state of the code continuously and automatically. If the documentation diverges from the current state of the code, the documentation should be modified to reflect the updated state (automatically or manually). Continuously verifying documentation means that developers can trust their documentation and know that what’s written there is still relevant and valid, or at least get a clear indication that a certain part of it is no longer valid. In this sense, Continuous Documentation is very much like continuous integration - it makes sure the documentation is always correct, similar to verifying that all the tests pass. This check could run on every commit, push, merge, or any other version-control event. Without it, keeping documentation up-to-date and accurate is extremely hard, and requires manual work that needs to be repeated regularly.
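As a sketch of what such an automated check might look like — here in Python, with all function names illustrative rather than taken from any specific tool — a verifier could compare the function names referenced in the documentation against those actually defined in the code, and flag stale references on every commit:

```python
import ast
import re

def defined_functions(source_code):
    """Collect the names of all functions defined in a Python source string."""
    tree = ast.parse(source_code)
    return {node.name for node in ast.walk(tree)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))}

def documented_functions(doc_text):
    """Treat backticked identifiers ending in '()' as function references."""
    return {m.group(1) for m in re.finditer(r"`(\w+)\(\)`", doc_text)}

def stale_references(doc_text, source_code):
    """Return documented function names that no longer exist in the code."""
    return documented_functions(doc_text) - defined_functions(source_code)
```

A real pipeline would run a check like this in CI and fail the build, or at least surface a warning, whenever `stale_references` returns a non-empty set, giving developers the clear signal that part of the documentation is no longer valid.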



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward