Daily Tech Digest - March 31, 2020

Nasscom seeks relief for technology startups for business continuity
Among the key measures Nasscom has asked the government to take are a rental subsidy for startup workspaces that are regulated, owned or managed by government agencies, and a blanket suspension of all deadlines, including tax payment and filing deadlines, until at least four weeks after all city lockdowns are lifted. The industry body said the pandemic has created a significant liquidity crunch for the sector, and that to ensure timely payment of salaries to employees, banks could voluntarily provide startups with an overdraft facility or interest-free, equity-convertible funding. Nasscom has also demanded a one-time provident fund opt-out option for employees. "The Government can consider providing an option to the employees for a onetime PF opt-out option for the next financial year 2020-21. In such a case, both the employee and employer's contributions towards the PF may be transferred directly to the employee. This will result in an increase in the take-home pay of the employees," said Nasscom in the representation made to the government.

Reference Architecture for Healthcare – Introduction and Principles

The good news is that information technology can solve problems of fragmentation through smart process management and the exchange of standardized information, among other means. A Blueprint for the Healthcare Industry: The aim must be to help organizations provide health services with better outcomes, at lower cost, and with an improved patient and staff experience. We need a toolbox that is flexible, adaptable to individual needs, and that can serve a network of partners that team up to deliver care. The Patient Perspective: As a patient with a chronic disease, I monitor my health condition daily. I manage my medication with the help of my devices and adjust my lifestyle accordingly. My care providers should work with me to manage my disease. The Health Professional Perspective: As a healthcare professional, I need to team up to coordinate delivery of care. I create, use, and share information with other care providers within a given episode of care, and across different treatment periods. The Architect and Planner Perspective: As a user of the reference architecture, I need an easy-to-use toolbox that is readily available and helps me in my daily work. It needs to align with the regulations of our industry.

Maybe the biggest challenge we face as a society is our ability to unlearn – to let go of – outdated concepts and beliefs in order to adopt new approaches. Our everyday lives are dominated by outdated concepts: change the oil every 3,000 miles, don’t wear white before Memorial Day, only senior management has the best ideas, don’t eat dessert until you’ve cleaned your plate, trade wars are easy to win, leeches work wonders on headaches, etc. Well, I’m going to throw down the gauntlet and challenge everyone to open their minds to the possibility of new ideas and new learning. That does not mean you should blindly believe, but instead, should invest the time to study, unlearn and learn new approaches and concepts. “You can’t climb a ladder if you’re not willing to let go of the rung below you.” As the new Chief Innovation Officer at Hitachi Vantara, leveraging ideation and innovation to derive and drive new sources of customer, product and operational value is more important than ever. So, Hitachi Vantara employees and customers, be prepared to change your frames; to challenge conventional thinking with respect to how we blend new concepts – AI / ML, Big Data, IOT – with tried and true ideas – Economics, Design Thinking – to create new sources of value.

How data governance and data management work together

Members of a data governance team
Although data governance provides a framework of controls for effective data management, it is just one component of the overall practice. Dan Everett, VP of product and solution marketing at Informatica, accurately described the relationship between data management and governance in a blog post. He said data governance must be implemented to be effective, while data management facilitates policy enforcement. Business size often determines how the data governance and data management responsibilities are organized and assigned. But size shouldn't be a determining factor for treating data as an enterprise asset, establishing effective data governance policies and performing high-quality data management. ... The initial data governance policies and data management procedures will most likely have gaps that lead to data quality issues. In addition, the task of ensuring enterprise data is correct and used properly throughout the organization is fluid by nature. In other words, "things change." Data usage is highly dynamic, and data governance controls and data management procedures may not always provide the guidance and best practices needed to guarantee data quality across all data stores.

“Growing awareness around data privacy issues has compelled consumers to seek more control over their data and take some action to protect their privacy online. However, with over half of Brits saying they don’t know how to safeguard their online privacy, there is still a clear need for education on how people can keep themselves, and their data, safe online.” The extensive study found that 86% claimed to have taken at least one step to protect themselves online, such as clearing or disabling cookies, limiting what they share on social media platforms, and not using public Wi-Fi. Almost exactly the same proportion said they could still do more to protect themselves. In terms of what keeps consumers awake at night, NortonLifeLock found that 65% of Brits believe facial recognition technology will be misused and abused, and 42% believe it will do more harm than good – even though the majority also seem to support its use, with over 70% supporting its use by law enforcement.

What are deepfakes – and how can you spot them?

A comparison of an original and deepfake video of Facebook chief executive Mark Zuckerberg.
Deepfake technology can create convincing but entirely fictional photos from scratch. A non-existent Bloomberg journalist, “Maisy Kinsley”, who had a profile on LinkedIn and Twitter, was probably a deepfake. Another LinkedIn fake, “Katie Jones”, claimed to work at the Center for Strategic and International Studies, but is thought to be a deepfake created for a foreign spying operation. Audio can be deepfaked too, to create “voice skins” or ”voice clones” of public figures. Last March, the chief of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages. ... Poor-quality deepfakes are easier to spot. The lip synching might be bad, or the skin tone patchy. There can be flickering around the edges of transposed faces. And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible on the fringe.

Spike in Remote Work Leads to 40% Increase in RDP Exposure to Hackers

As Covid-19 continues to wreak havoc globally, companies are keeping their employees at home. To ensure compliance and stay atop security standards, teleworkers have to patch into their company’s infrastructure using remote desktop protocol (RDP) and virtual private networks (VPN). But not everyone uses these solutions securely. Research by the folks behind Shodan, the search engine for Internet-connected devices, reveals that IT departments globally are exposing their organizations to risk as more companies go remote due to COVID-19. “The Remote Desktop Protocol (RDP) is a common way for Windows users to remotely manage their workstation or server. However, it has a history of security issues and generally shouldn’t be publicly accessible without any other protections (ex. firewall whitelist, 2FA),” writes Shodan creator John Matherly. After pulling new data on devices exposed via RDP and VPN, Matherly found that the number of devices exposing RDP to the Internet on the standard port (3389) jumped more than 40 percent over the past month. In an attempt to foil hackers, IT administrators sometimes move an insecure service to a non-standard port (aka security by obscurity), Matherly notes.
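For admins auditing their own estate, the underlying check is simple: can an arbitrary host complete a TCP handshake on the RDP port? A minimal sketch in Python (run it only against hosts you own; port numbers beyond 3389 are illustrative):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, i.e. the
    service listening on that port is reachable from wherever this runs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the RDP default port (3389) and a commonly used non-standard one.
exposed = {p: port_open("127.0.0.1", p) for p in (3389, 3390)}
```

Shodan does this at Internet scale, which is exactly why moving RDP to a non-standard port does not hide it from such a scan.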

Google’s CameraX Android API will let third-party apps use the best features of the stock camera

The benefit of using CameraX as a wrapper for the Camera2 API is that, internally, it resolves any device-specific compatibility issues that may arise. This alone will be useful for camera app developers since it can reduce boilerplate code and time spent researching camera problems. That’s not all that CameraX can do, though. While that first part is mostly only interesting to developers, there’s another part that applies to both developers and end users: Vendor Extensions. This is Google’s answer to the camera feature fragmentation on Android. Device manufacturers can opt to ship extension libraries with their phones that allow CameraX (and developers and users) to leverage native camera features. For example, say you really like Samsung’s Portrait Mode effect, but you don’t like the camera app itself. If Samsung decides to implement a CameraX Portrait Mode extension in its phones, any third-party app using CameraX will be able to use Samsung’s Portrait Mode. Obviously, this isn’t just confined to that one feature. Manufacturers can theoretically open up any of their camera features to apps using CameraX.

Personal details for the entire country of Georgia published online

Personal information such as full names, home addresses, dates of birth, ID numbers, and mobile phone numbers was shared online in a 1.04 GB MDB (Microsoft Access database) file. The leaked data was spotted by Under the Breach, a data breach monitoring and prevention service, and shared with ZDNet over the weekend. The database contained 4,934,863 records, including details for millions of deceased citizens. Georgia's current population is estimated at 3.7 million, according to a 2019 census. It is unclear if the forum user who shared the data is the one who obtained it. The data's source also remains a mystery. On Sunday, ZDNet initially reported this leak as coming from Georgia's Central Election Commission (CEC), but in a statement on Monday, the commission denied that the data originated from its servers, as the file contained information the commission doesn't usually collect.

AlphaFold Algorithm Predicts COVID-19 Protein Structures

AlphaFold is composed of three distinct layers of deep neural networks. The first layer is a variational autoencoder stacked with an attention model, which generates realistic-looking fragments from a single sequence's amino acids. The second layer is split into two sublayers. The first sublayer optimizes inter-residue distances: the contact map, a 2D representation of pairwise amino acid residue distances, is projected onto a single dimension and fed into a 1D CNN. The second sublayer optimizes a scoring network, a 3D CNN that rates how closely the generated substructures resemble a real protein. After regularizing, a third neural network layer scores the generated protein against the actual model. The model was trained on the Protein Data Bank, a freely accessible database that contains the three-dimensional structures of larger biological molecules such as proteins and nucleic acids.
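A contact map itself is easy to picture: a binary matrix marking which residue pairs sit within some distance cutoff. A minimal sketch with made-up coordinates (8 Å is a common cutoff convention; this illustrates the data structure, not AlphaFold's pipeline):

```python
import math

def contact_map(coords, cutoff=8.0):
    """Build a binary contact map: entry [i][j] is 1 when residues i and j
    lie within `cutoff` angstroms of each other."""
    n = len(coords)
    return [[1 if math.dist(coords[i], coords[j]) <= cutoff else 0
             for j in range(n)] for i in range(n)]

# Four hypothetical residue positions (x, y, z), in angstroms.
residues = [(0, 0, 0), (3, 0, 0), (6, 0, 0), (20, 0, 0)]
cmap = contact_map(residues)
```

The map is symmetric with a trivially "in contact" diagonal; flattening such a matrix row by row is one simple way to get the one-dimensional input the article describes.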

Quote for the day:

"A leader knows what's best to do. A manager knows merely how best to do it." -- Ken Adelman

Daily Tech Digest - March 30, 2020

Cassandra and DataStax: Reunited, and it feels so good

While single-vendor open source projects are somewhat common, they’re verboten for ASF projects. This became an issue for Cassandra, given that years ago DataStax may have contributed as much as 85 percent of the Cassandra code, by one estimate, while also running a community content forum (Planet Cassandra), Cassandra events, and more. This led to ASF accusations that DataStax exercised (or had the potential to exercise) undue influence over Cassandra. In response, DataStax pulled back, leaving the Cassandra community to fend for itself. This didn’t dissuade companies from continuing to bet big on Cassandra. Apple, for example, had long embraced the highly scalable, high-performance distributed database, as I wrote in 2015. While the company is famously cagey about sharing how it uses technology, we do know that it runs more than 100,000 Cassandra nodes today. With such a big investment in Cassandra, Apple couldn’t afford to let it fail, so Apple worked hard to ensure that stability dramatically improved from the Cassandra 3.11 release to today’s Cassandra 4.0 release.

Russia's Cybercrime Rule Reminder: Never Hack Russians

On Tuesday, Russia's Federal Security Service, known as the FSB, announced that together with Russia's Ministry of Internal Affairs, it had detained more than 30 individuals across 11 regions of the country, including Moscow, Crimea and St. Petersburg. Subsequently, authorities charged 25 of them with selling stolen credit and debit card data that traced to Russian as well as foreign financial institutions. Authorities have accused the individuals, who include Russian, Ukrainian and Lithuanian citizens, of creating more than 90 online stores to sell stolen data, as well as using the stolen card data to purchase and resell more than $1 million worth of goods. Authorities say that when they searched suspects' residences, they also seized firearms, illegal drugs, gold bars, precious coins and cash: $1 million in U.S. dollars and 3 million rubles (worth about $39,000). The infrastructure used by the alleged criminal enterprise has been shuttered, authorities say. The FSB said one of the individuals it arrested had previously been jailed for similar offenses.

The First Way is to think about the performance of an entire system or process, rather than a specific silo or team. From the first line of code to successful deployment, IT departments must focus on the big picture, and emphasize larger organizational goals rather than smaller local ones. The Second Way focuses on feedback loops. A DevOps culture should accelerate and amplify feedback loops, enabling admins to identify and address any issues as quickly as possible. The Third Way fosters a culture of continual experimentation and learning, which requires IT teams to take risks and set aside time for innovation. In a DevOps culture, celebrate -- don't admonish -- rapid experimentation and rapid failure. It's this cycle of experimentation, failure and lessons learned that continually improves a DevOps practice over time. Naturally, DevOps will shake up the way any IT organization makes and measures progress. Encourage collaboration across department lines, and listen and take action on team feedback.

Don't neglect the bread-and-butter stuff, either. As Senior Reporter Gregg Keizer explains in "How businesses can save money when everyone needs Office to work from home," you can cut costs substantially by switching to the right Office flavor. Gregg's advice may hold beyond the short term, as businesses discover that employees can work just as well at home as they do in an office. So we're paying for office space...why? Cost savings sometimes arrive in the form of needed functionality you weren't aware you already had. In "10 SD-WAN features you're probably not using, but should be," Network World contributor Neil Weinberg clues in SD-WAN customers: You may not know this, but zero-touch provisioning, application-aware routing, microsegmentation, and a bunch of other stuff may already be part of your SD-WAN solution. If you were planning on procuring any of those things separately, you don't have to. Recommendations like these will sound familiar to those who have endured previous downturns. Prioritize. Cut bait on bloated projects with uncertain return. Consider free stuff, even if it might not have every feature you want.

Ministry of Defence releases defence data management strategy

According to the report, the MoD sees more effective use of data, information and the systems that manage and process data as “vital enablers of both operational advantage and business transformation”. “New and emerging technologies can provide better capabilities to our operations and greater efficiency in our supporting functions, but success will require us to consider data differently,” it noted. “If we are to deliver improvements at speed and scale, then we must start with managing our data far more effectively than we do today,” the report added. A set of seven strategic objectives is outlined in the document. These goals relate to areas such as improving the availability and accessibility of defence data and implementing data governance across the MoD, so the department can establish clear accountabilities and responsibilities for its data management. The document also outlines goals such as improving the quality and veracity of the MoD's data, ensuring the integrity, confidentiality and security of data, and driving the consistent use of decision-making data across the department to improve the coherency of the information produced from it.

Adventures in Graph Analytics Benchmarking

With all the attention graph analytics is getting lately, it’s increasingly important to measure its performance in a comprehensive, objective, and reproducible way. I covered this in another blog, in which I recommended using an off-the-shelf benchmark like the GAP Benchmark Suite* from the University of California, Berkeley. There are other graph benchmarks, of course, like LDBC Graphalytics*, but they can’t beat GAP for ease of use. There’s significant overlap between GAP and Graphalytics, but the latter is an industrial-strength benchmark that requires a special software configuration. Personally, I find benchmarking boring. But it’s unavoidable when I need performance data to make a decision. Obviously, the data has to be accurate. But it also has to be relevant to the decision at hand. It’s important that my use of a particular benchmark is aligned with its intended purpose. That’s why the performance comparison shown in Figure 1 had me scratching my head. It compares the PageRank performance of RAPIDS cuGraph* and HiBench*. The latter is a big data benchmark developed by some of my Intel colleagues to measure a wide range of analytics functions—not just PageRank.
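For readers who want to see what these benchmarks are actually timing, PageRank itself fits in a few lines of naive power iteration. The benchmarks above run heavily optimized versions of the same idea; the graph and damping factor here are illustrative:

```python
def pagerank(adj, d=0.85, iters=50):
    """Power-iteration PageRank over an adjacency list {node: [out-neighbors]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1.0 - d) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:
                share = d * rank[v] / len(out)
                for w in out:
                    nxt[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    nxt[w] += d * rank[v] / n
        rank = nxt
    return rank

# Tiny example graph: a links to b and c, b to c, c back to a.
g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
r = pagerank(g)
```

The per-iteration scatter of rank mass along edges is the memory-bandwidth-bound workload that GAP, Graphalytics, and HiBench all stress, each in its own harness.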

This 5G smartphone comes with Android, Linux - and a keyboard.

London-based Planet Computers is on a mission to reinvent the iconic Psion Series 5 PDA for the smartphone age. Although mobile professionals -- especially those old enough to remember the 1997 Series 5 with affection -- are often open to the idea, the company's previous efforts, the Gemini PDA and Cosmo Communicator, have had their drawbacks. The Gemini PDA, for example, is a landscape-mode clamshell device that, despite a great keyboard, is difficult to make and take calls on and only has one camera -- a front-facing unit for video calling. The Cosmo Communicator adds a small external touch screen for notifications and some basic functions plus a rear-facing camera, but you still have to open the clamshell to do anything productive. The Astro Slide, announced today via a crowdfunding campaign on Indiegogo, has a new design with one large (6.53-inch) screen that slides open to reveal the keyboard, transforming the device from a portrait smartphone to a landscape PDA via a patented RockUp mechanism.

A Practical Guide to Data Obfuscation

The simplest way to obfuscate data is by masking out or redacting characters or digits with a fixed symbol. This is often used for credit card numbers, where either the leading or the trailing digits are crossed out with an “X”. ... For more advanced anonymization, we need to look at functions that support something called differential privacy. The goal here is to apply statistical methods to modify content at a larger scope, such as the table level. Imagine, say, that you need to analyze customer data but require the birthday in order to group customers by demographics. Randomizing this piece of PII is not a good idea, as it would change the overall composition of the data, often making it equally distributed across the possible value range. Instead, what is needed is a function that changes every birthday so the overall distribution stays nearly the same, but individuals are no longer identifiable. It may mean adding a few days or a few weeks to each date, with the amount of shift tuned to the overall size of the dataset. Query engines may offer a diff_privacy() function (or something with a similar name) for that purpose, allowing you to introduce uncertainty or jitter into your sensitive data so that the above requirement can be fulfilled.
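Note that diff_privacy() is a hypothetical name, not a standard function. Both techniques can be sketched in a few lines: the masking is exactly as described, while the date jitter below uses Laplace-distributed noise, the classic differential-privacy mechanism (scale_days is an illustrative tuning choice):

```python
import random
from datetime import date, timedelta

def mask_card(number, keep_last=4):
    """Redact all but the last `keep_last` digits with 'X' (the simple
    masking approach described above)."""
    digits = number.replace(" ", "")
    return "X" * (len(digits) - keep_last) + digits[-keep_last:]

def jitter_birthday(d, scale_days=14, rng=random):
    """Shift a birthday by Laplace-distributed noise so individual dates are
    no longer exact, while the population's distribution barely moves."""
    # A Laplace sample is the difference of two exponential samples.
    noise = rng.expovariate(1 / scale_days) - rng.expovariate(1 / scale_days)
    return d + timedelta(days=round(noise))

masked = mask_card("4111 1111 1111 1234")
fuzzed = jitter_birthday(date(1990, 5, 17))
```

Because the noise is symmetric around zero, averages and histograms over many rows stay close to the truth even though no single stored birthday can be trusted exactly.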

9 offbeat databases worth a look
Many of DuckDB’s features are counterparts to what’s found in bigger OLAP products, even if smaller in scale. Data is stored as columns rather than rows, and query processing is vectorized to make the best use of CPU caching. You won’t find much in the way of native connectivity to reporting solutions like Tableau, but it shouldn’t be difficult to roll such a solution manually. Aside from bindings for C++, DuckDB also connects natively to two of the most common programming environments for analytics, Python and R. ... The goal behind HarperDB is to provide a single database for handling structured and unstructured data in an enterprise—somewhere between a multi-model database like FoundationDB and a data warehouse or OLAP solution. Ingested data is deduplicated and made available for queries through the interface of your choice: SQL, NoSQL, Excel, etc. BI solutions like Tableau or Power BI can integrate directly with HarperDB without the data needing to be extracted or processed. Both enterprise and community editions are available.

Slack redesigns app as Microsoft Teams hits 44 million users

The Slack redesign contains several elements that make the product look more like Teams. The top of the app now features a search bar and navigation buttons. Slack also added tabs for files and notifications, such as when a user tags someone in a message. Even more significant, Slack now lets paid users place channels within folders. For example, a user could put several channels in a "marketing team" folder. The setup is similar to how Teams groups channels -- except in Slack, each user gets to customize the layout. The inability to organize channels into groups had been a stumbling block for many Slack users, said Irwin Lazar, analyst at Nemertes Research. Slack should be able to get some companies to switch from free to paid plans with the introduction of folders as a premium service, he said. The redesign also lays the groundwork for Slack to introduce more real-time communications features. A newly reorganized sidebar within channels features a prominent phone icon that lets users begin a video call.

Quote for the day:

"Leadership offers an opportunity to make a difference in someone's life, no matter what the project." -- Bill Owens

Daily Tech Digest - March 29, 2020

Microsoft Patents New Cryptocurrency System Using Body Activity Data
Microsoft Technology Licensing, the licensing arm of Microsoft Corp., has been granted an international patent for a “cryptocurrency system using body activity data.” The patent was published by the World Intellectual Property Organization (WIPO) on March 26. The application was filed on June 20 last year. “Human body activity associated with a task provided to a user may be used in a mining process of a cryptocurrency system,” the patent reads, adding as an example: A brain wave or body heat emitted from the user when the user performs the task provided by an information or service provider, such as viewing advertisement or using certain internet services, can be used in the mining process. ... Different types of sensors can be used to “measure or sense body activity or scan human body,” the patent explains. They include “functional magnetic resonance imaging (fMRI) scanners or sensors, electroencephalography (EEG) sensors, near infrared spectroscopy (NIRS) sensors, heart rate monitors, thermal sensors, optical sensors, radio frequency (RF) sensors, ultrasonic sensors, cameras, or any other sensor or scanner” that will do the same job.

Is Samsung Quietly Becoming a Significant Player in the Cryptocurrency and Blockchain Industry?

It is thought that Samsung has created a processor dedicated to protecting the user's PIN, pattern, password, and blockchain private key, working in combination with its Knox security platform to harden security on the new S20 range. Samsung introduced its Blockchain Keystore last year; it initially supported only ERC-20 tokens but added Bitcoin in August. Using Samsung devices with Blockchain Keystore means users can store their Bitcoin and crypto wallet private keys on the device. One of the most critical and overlooked issues is control over the private wallet key; most crypto thefts and hacks happen because users fail to store their tokens in wallets for which they hold the private keys. Storing Bitcoin or crypto in a smartphone wallet therefore gives users control over their private keys and removes reliance on external companies. Crypto adoption has fallen short of expectations in recent years. However, improvements in user experience have helped make crypto more accessible.

Network of fake QR code generators will steal your Bitcoin

A network of Bitcoin-to-QR-code generators has stolen more than $45,000 from users in the past four weeks, ZDNet has learned. The nine websites provided users with the ability to enter their Bitcoin address, a long string of text where Bitcoin funds are stored, and convert it into a QR code image they could save on their PC or smartphone. Today, it's a common practice to share a Bitcoin address as a QR code and request a payment from another person. The receiver scans the QR code with a Bitcoin wallet app and sends the requested payment without having to type a lengthy Bitcoin address by hand. By using QR codes, users eliminate the possibility of a mistype that might send funds to the wrong wallet. Last week, Harry Denley, Director of Security at the MyCrypto platform, ran across a suspicious site that converted Bitcoin addresses into QR codes. While many services like this exist, Denley realized that the website was malicious in nature. Instead of converting an inputted Bitcoin (BTC) address into its QR code equivalent, the website always generated the same QR code -- for a scammer's wallet.
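One defense on the paying side: decode the QR with a trusted reader and confirm the address matches the intended one, character for character. Legacy Base58Check addresses even carry a built-in checksum that can be verified offline. A sketch (covers "1..."/"3..." addresses only, not bech32 "bc1..." ones; note the checksum catches typos and corruption, not a wholesale address swap):

```python
import hashlib

# Base58 alphabet: no 0, O, I, or l, to avoid visual confusion.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_decode(s):
    num = 0
    for ch in s:
        num = num * 58 + ALPHABET.index(ch)  # raises ValueError on bad chars
    # Each leading '1' encodes one leading zero byte.
    n_leading = len(s) - len(s.lstrip("1"))
    body = num.to_bytes((num.bit_length() + 7) // 8, "big")
    return b"\x00" * n_leading + body

def is_valid_btc_address(addr):
    """Check the 4-byte double-SHA256 checksum of a Base58Check address."""
    try:
        raw = base58_decode(addr)
    except ValueError:
        return False
    if len(raw) != 25:  # 1 version byte + 20-byte hash + 4-byte checksum
        return False
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum
```

A tampered or mistyped address fails the checksum with overwhelming probability, which is exactly why the QR scam had to substitute a complete, valid address rather than corrupt the existing one.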

The 5G Economic Impact

Despite its nascent status, the 5G ecosystem is already swimming in financial might. That same GSMA report predicts 5G technology will add $2.2 trillion to the global economy over the next 15 years. And operators are expected to spend more than $1 trillion on mobile capex between 2020 and 2025, with 80% of that spend directed at their 5G networks. While past technology evolutions primarily targeted the consumer market, the spend and return on 5G has a larger focus on the broader enterprise space. This includes connecting not just traditional enterprise workers and their respective mobile devices but connecting all electronic devices. This will involve a broader push toward edge deployments that can serve what are expected to be billions of connected and IoT devices. “With greater reliability and data speeds that will surpass those of 4G networks, a combination of 5G and local edge compute will pave the way for new business value,” ABI Research noted in a recent report, citing benefits gained from agility and process optimization; better and more efficient quality assurance and productivity improvement.

Adopting robotic process automation in Internal Audit

With automation technologies advancing quickly and early adopters demonstrating their effectiveness, now is the time to understand and prioritize opportunities for Internal Audit robotic process automation, and to take important steps to prepare for thoughtful, progressive deployment. The age of automation is here, and with it comes opportunities for integrating Internal Audit (IA) robotic process automation (RPA) into the third line of defense (aka Internal Audit). IA departments, large and small, have already begun their journey into the world of automation by expanding their use of traditional analytics to include predictive models, RPA, and cognitive intelligence (CI). This is leading to quality enhancements, risk reductions, and time savings—not to mention increased risk intelligence. The automation spectrum, as we define it, comprises a broad range of digital technologies. At one end are predictive models and tools for data integration and visualization. At the other end are advanced technologies with cognitive elements that mimic human behavior. Many IA organizations are familiar with the first part of the automation spectrum, having already established foundational data integration and analytics programs to enhance the risk assessment, audit fieldwork, and reporting processes.

A debate between AI experts shows a battle over the technology’s future

Why add classical AI to the mix? Well, we do all kinds of reasoning based on our knowledge in the world. Deep learning just doesn’t represent that. There’s no way in these systems to represent what a ball is or what a bottle is and what these things do to one another. So the results look great, but they’re typically not very generalizable. Classical AI—that’s its wheelhouse. It can, for example, parse a sentence to its semantic representation, or have knowledge about what’s going on in the world and then make inferences about that. It has its own problems: it usually doesn’t have enough coverage, because too much of it is hand-written and so forth. But at least in principle, it’s the only way we know to make systems that can do things like logical inference and inductive inference over abstract knowledge. It still doesn’t mean it’s absolutely right, but it’s by far the best that we have. And then there’s a lot of psychological evidence that people can do some level of symbolic representation.

Apache Flink in 10 Minutes

Apache Flink is an open-source stream processing framework. It is widely used by companies such as Uber, ResearchGate, and Zalando. At its core, it is all about the processing of stream data coming from external sources. It can operate with state-of-the-art messaging frameworks like Apache Kafka, Apache NiFi, Amazon Kinesis Streams, and RabbitMQ. Let's explore a simple Scala example of stream processing with Apache Flink. We'll ingest sensor data from Apache Kafka in JSON format, parse it, filter it, calculate the distance each sensor has traveled over the last 5 seconds, and send the processed data back to Kafka on a different topic. To feed data into Kafka, we'll create a simple Python-based Kafka producer. The code is in the appendix. ... Now we need a way to parse the JSON strings. As Scala has no built-in functionality for that, we'll use the Play Framework. First, we need a case class to parse our JSON strings into. For simplicity, we will use automatic conversion from JSON strings to the JsonMessage case class. To transform elements in the stream, we use the .map transformation, which takes a single element as input and produces a single output. We'll also have to filter out the elements that failed to parse.
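Flink specifics aside, the transformation chain (parse, filter out failed parses, window by time, aggregate distance) can be illustrated in plain Python over a batch of messages. Field names such as sensor_id, ts, x, and y are made up for the sketch:

```python
import json
import math
from collections import defaultdict

def parse(raw):
    """Parse one JSON sensor message; return None on malformed input
    (the 'filter out failed parses' step)."""
    try:
        m = json.loads(raw)
        return (m["sensor_id"], float(m["ts"]), float(m["x"]), float(m["y"]))
    except (ValueError, KeyError, TypeError):
        return None

def distances_per_window(messages, window_s=5.0):
    """Sum the distance each sensor travels inside tumbling time windows,
    keyed by (sensor_id, window index)."""
    buckets = defaultdict(list)
    for raw in messages:
        rec = parse(raw)
        if rec is None:
            continue  # drop messages that failed to parse
        sensor, ts, x, y = rec
        buckets[(sensor, int(ts // window_s))].append((ts, x, y))
    out = {}
    for key, pts in buckets.items():
        pts.sort()  # order points by timestamp within the window
        out[key] = sum(math.dist(p[1:], q[1:]) for p, q in zip(pts, pts[1:]))
    return out
```

In the real pipeline, Flink's keyed tumbling windows replace the manual bucketing here, and results stream continuously instead of arriving as one batch.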

Google Invents AI That Learns a Key Part of Chip Design

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work, posted today to arXiv. “We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.” Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks' worth of design effort by human experts in terms of power, performance, and area. Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that performance is maximized while power consumption and chip area are minimized.

This Simple WhatsApp Hack Will Hijack Your Account: Here’s What You Must Do Now

The most obvious advice is never to forward a six-digit SMS code to anyone, for any reason. There have been other attacks covering other platforms using the same method. When a code is sent to your phone, it relates to your phone. But there is a fix here that will protect your WhatsApp account even if the SMS code is sent onward, ensuring you can't fall victim to this crime. The code sent by SMS when you set up your WhatsApp account on a new phone comes directly from WhatsApp itself: the platform sets the code and sends it to you. But there is a totally separate setting in your own WhatsApp application that allows you to set your own six-digit PIN. There is some confusion because both are six-digit numbers, but they are entirely separate. Most people have still not set up this PIN. The “Two-Step Verification” setting can be accessed under Settings > Account from within the app, and it takes less than a minute to set up. The PIN is yours to choose, and there is even the option of a backup email address. WhatsApp will ask you for the PIN when you change phones, and periodically while you're using the app, as an added layer of security.

How To Create Values & Ethics To AI In The Workplace?

The widespread uptake of this technology comes at a time when more and more businesses are proactively addressing diversity and inclusivity among their workforce. Reports suggest that the US needs a curious, ethical AI workforce that works collaboratively to build reliable AI systems. Members of AI development teams therefore need to engage in deep discussions about the implications of their work for the warfighters who will use it. In order to build AI systems effectively and ethically, defense organizations must encourage an ethical, inclusive work environment and procure a diverse workforce. This workforce should include curiosity experts: professionals who focus on human needs and behaviors, are more likely to envision the unsolicited and unintended consequences of a system's use and mismanagement, and ask tough questions about those consequences. According to a research report, cognitively diverse teams solve problems faster than teams of cognitively similar people. This also paves the way for innovation and creativity to flow, minimizing the risk of homogeneous ideas coming to the fore.

Quote for the day:

"A leader is not an administrator who loves to run others, but someone who carries water for his people so that they can get on with their jobs." -- Robert Townsend

Daily Tech Digest - March 28, 2020

Coronavirus transforms peak internet usage into the new normal

"We've been watching the network very closely," said Joel Shadle, a spokesman for Comcast. "We're seeing a shift in peak usage. Instead of everyone coming home and getting online, we're seeing sustained usage and peaks during the day." AT&T reported Monday that on Friday and again on Sunday it hit record highs of data traffic between its network and its peers, driven by heavy video streaming. The company also said it saw all-time highs in data traffic from Netflix on Friday and Saturday with a slight dip on Sunday. And the company reported that its voice calling traffic has been way up, too. Wireless voice calls were up 44% compared to a normal Sunday; Wi-Fi calling was up 88% and landline home phone calls were up 74%, the company said in its press release Monday.  AT&T also said it has deployed FirstNet portable cell sites to boost coverage for first responders in parts of Indiana, Connecticut, New Jersey, California and New York. Cloudflare, which provides cloud-based networking and cybersecurity services and which has been tracking worldwide data usage, noted in a blog post last week that it had seen network usage increase as much as 40% in Seattle, where the coronavirus first broke out in the US.

The Ecommerce Surge: Guarding Against Fraud

As more consumers shift to online shopping during the COVID-19 pandemic, retailers must ramp up their efforts to guard against ecommerce payment fraud, says Toby McFarlane, a cybersecurity expert at CMSPI, a payments consultancy. "Retailers should have in place already tools to monitor fraud and approval rates" so they can be benchmarked, McFarlane says in an interview with Information Security Media Group. "If you see a spike in fraud, for example, you want to know if that's a general industry trend or if that is something specific to your business." The shift toward ecommerce in recent weeks presents opportunities to gain a competitive advantage, McFarlane says. "We've seen average transaction values are increasing online, so if merchants can ensure their online infrastructure and experience is set up to handle that, then we could see certain merchants taking market share from non-optimized merchants," he says.

How to refactor the God object antipattern

It's not good enough to simply write code that works. That code must be easily maintained, enhanced and debugged when problems happen. One of the reasons why object-oriented programming is so popular is because it delivers on these requirements. But antipatterns often appear when developers take shortcuts or focus more on the need to get things done instead of done right. One of those common antipatterns is the God object. One of the main concepts in object-oriented programming is that every component has a single purpose, and that component is only responsible for the properties and fields that allow it to perform its pertinent functions. ... Good object-oriented design sometimes takes a back seat to a need to get things done, and the single responsibility model gets thrown out the window. Then, out of nothingness, the God object emerges. In simple terms, the God object is a single Java file that violates the single-responsibility design pattern because it: performs multiple tasks; declares many unrelated properties; and maintains a collection of methods that have no logical relationship to one another, other than performing operations pivotal to the application function.
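To make the antipattern concrete, here is a minimal sketch in Python (the article discusses Java, but the principle is language-agnostic); the class and method names are invented for illustration:

```python
# A God object: one class handling order storage, tax math, and formatting.
class OrderManager:
    def __init__(self):
        self.orders, self.tax_rate = [], 0.2

    def add(self, price):
        self.orders.append(price)

    def total_with_tax(self):
        return sum(self.orders) * (1 + self.tax_rate)

    def render_invoice(self):
        return f"Invoice: {self.total_with_tax():.2f}"

# Refactored: each class has a single responsibility.
class OrderBook:
    def __init__(self):
        self.orders = []

    def add(self, price):
        self.orders.append(price)

    def subtotal(self):
        return sum(self.orders)

class TaxCalculator:
    def __init__(self, rate):
        self.rate = rate

    def apply(self, amount):
        return amount * (1 + self.rate)

class InvoiceFormatter:
    @staticmethod
    def render(amount):
        return f"Invoice: {amount:.2f}"

book = OrderBook()
book.add(100.0)
print(InvoiceFormatter.render(TaxCalculator(0.2).apply(book.subtotal())))  # → Invoice: 120.00
```

After the split, tax rules, storage, and presentation can each be changed, tested, or replaced without touching the other two.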

What’s Next in DevOps?

DevOps is aimed at "actualizing agile" by ensuring that teams have the technical capabilities to be truly agile, beyond just shortening their planning and work cadence. Importantly, DevOps also has Lean as part of its pedigree. This means that there is a focus on end-to-end lifecycle, flow optimisation, and thinking of improvement in terms of removing waste as opposed to just adding capacity. There are a huge number of organisations and teams that are still just taking their first steps in this process. For them, although the terminology and concepts may seem overwhelming at first, they benefit from a wide range of well-developed options to suit their development lifecycle needs. I anticipate that many software tools will be optimizing for ease-of-use, and continuing to compete on usability and the appearance of the UI. Whereas most early DevOps initiatives were strictly script and configuration file based, more recent offerings help to visualise processes and dependencies in a way that is easily digested by a broader segment of the organization.

Tips for cleaning data-center gear in response to coronavirus

Dell has come up with some guidance for cleaning its data center products. It's well timed, as data-center operators are tasked with implementing access and cleaning procedures in response to COVID-19. It's a real issue. The two biggest data center and colocation providers, Equinix and Digital Realty Trust, are restricting visitors to their facilities for the time being. Since the hardware in a colocation data center is owned by the clients, they have every right to visit the facility to perform maintenance or upgrades – but not for now. Meanwhile, data-center staff have been declared essential and are exempt from California's "stay at home" order, so like grocery store and banking staff, data center workers can go to work. Right off the bat, Dell acknowledges that its data center products "are not high touch products," and that data centers should have a clean-room policy where people are required to sanitize their hands before they enter. If your gear does need sterilization, Dell recommends engaging a professional cleaning company that specializes in sterilizing data center equipment. If that's not possible, then you can do it yourself as a last resort.

States of shock: Recovering our tech economy after COVID-19

Segal says the effects of the current economic downturn may be compounded by crises of confidence throughout the world, and reactions to the uncertain nature of the virus' transmissivity path — particularly in those countries where uncertainty preceded action. But that uncertainty, being a psychological factor, could be remedied in short order, giving her optimism that the global economy, including technology, could resume its previous course by the end of 2020. "We've certainly had at least a pause," remarked ZDNet contributor Ross Rubin, principal analyst with Reticle Research. He noted Apple's warning of supply chain disruptions for components for iPhone and other devices. As a supplier itself, it first closed its retail outlets inside China, and later as infection cases within China subsided, reopened those stores at roughly the same time it closed its retail outlets outside China. "The reports that we're getting back now is that the factories are starting to gear up again," Rubin continued. For example, Apple has announced product refreshes for iPad, still on schedule for May. "There seems to be some confidence there that, while those products do not ship in anywhere near the same volumes as iPhones — particularly the iPad Pro, which is a more premium product — they are introducing new, cellular-enabled products."

Aisera: The Next Generation For RPA (Robotic Process Automation)

A good way to look at this is as a simple equation: AI + RPA = Conversational RPA. When you converge AI and RPA, you get Conversational RPA. AI provides a human-like dialogue interface, giving users consumer-like application experiences similar to those of Alexa, WhatsApp, Instagram, and Snapchat. This natural, human-like interface interacts with users and performs duties, tasks, IT workflows, and business workflows. RPA is used to automate simple and complex workflows that are highly repetitive and typical of back-office functions; most of these should not require humans to manage, monitor, or execute them. Conversational RPA's self-learning ability lowers the barrier to user adoption and lends itself to expediting complex challenges, such as cloud and application integrations, compliance, audit trail creation, and user experience analysis, that require complex workflows. Conversational RPA supports new workflows and existing workflows, and provides a way to customize workflows to meet business needs.

Automate security testing and scans for DevSecOps success

Automated security testing analyzes environments to make sure they meet expectations. Organizations mandate particular environment configurations to meet security and performance goals, but you don't know that the configuration is as expected without testing. Processes like white box and black box testing can help QA engineers pinpoint potential vulnerabilities before it's too late. If configuration is out of specification, the software team can halt the release and remediate the security deficiencies themselves, or alert the security team. Remediation on the fly might be the better option if automation is in place, such as declarative configuration management, to handle configuration drift. If you have both red teams -- aggressive fake attackers -- and blue teams -- their counterparts enacting defenses -- in security, this is also the phase in which you should launch real attacks against your code. If the app can't handle it, it's time to go back to the drawing board with the developers to make the product more resilient. If the app passes, push to production with peace of mind.
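A minimal sketch of such an automated configuration audit, with an invented security baseline (the keys and values are illustrative, not from the article):

```python
# Expected security baseline for the environment (illustrative values).
BASELINE = {
    "tls_min_version": "1.2",
    "admin_port_open": False,
    "password_min_length": 12,
}

def audit(actual):
    """Return the settings that drift from the baseline;
    an empty list means the environment passes the release gate."""
    return [
        key for key, expected in BASELINE.items()
        if actual.get(key) != expected
    ]

deployed = {"tls_min_version": "1.0", "admin_port_open": False,
            "password_min_length": 12}
violations = audit(deployed)
if violations:
    print("halt release:", violations)  # → halt release: ['tls_min_version']
```

In a pipeline, a non-empty result would fail the build, either halting the release or alerting the security team as described above.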

Quantum entanglement breakthrough could boost encryption, secure communications

Generating photons at two micrometres had never been demonstrated before. A major challenge for the researchers was to get their hands on the appropriate technology to conduct their experiment. "You need detectors that are able to see single photons at two micrometres, and we had to develop the right technology for these measurements," says Clerici. "And on the other side, you also need a specific piece of technology to generate the photons." In partnership with technology manufacturer Covesion, Clerici and his team engineered a nonlinear crystal suitable for operating at two micrometres. Photons are generated when short pulses of light from a laser source pass through the crystal. In theory, the entangled photons generated at the new wavelength should be able to travel as far as the photons generated through existing methods and used for satellite communication. But the experiment is still in its early stages, and Clerici said the team hasn't yet identified how much information the new technology can communicate, or how quickly.

Google's MediaPipe Machine Learning Framework Web-Enabled with WebAssembly

The browser-enabled version of MediaPipe graphs is implemented by compiling the C++ source code to WebAssembly using Emscripten and creating an API for all necessary communications back and forth between JavaScript and C++. Required demo assets (ML models and auxiliary text/data files) are packaged as individual binary data packages, to be loaded at runtime. To optimize for performance, MediaPipe’s browser version leverages the GPU for image operations whenever possible and resorts to the lightest (yet accurate) available ML models. The XNNPack ML inference library is additionally used in connection with the TensorFlow Lite inference calculator (TfLiteInferenceCalculator), resulting in an estimated 2-3x speed gain in most applications. Google plans to improve MediaPipe’s browser version and give developers more control over template graphs and assets used in the MediaPipe model files. Developers are invited to follow the Google Developers Twitter account.

Quote for the day:

"Leadership is the other side of the coin of loneliness, and he who is a leader must always act alone. And acting alone, accept everything alone." -- Ferdinand Marcos

Daily Tech Digest - March 27, 2020

The Role Of Human Judgment As A Presumed Integral Ingredient For Achieving True AI

Human judgment is yet to be embodied into AI.
Some in AI would argue that human judgment is going to arise anyway within AI systems as a consequence of some form of “intelligence explosion” that might occur, and that there’s no need to fret about how to code it or otherwise craft it by human hands. Essentially, some believe that if you make a large enough kind of Artificial Neural Network (ANN), oftentimes today referred to as Machine Learning or Deep Learning, there is going to be an arising emergence of true AI by the mere act of tossing together enough artificial neurons. One supposes that this is akin to an atomic explosion, such that if you seed a process and get it underway, there will be a chain reaction that becomes somewhat self-sustaining and grows iteratively. In the case of a large-scale (well, really, really, massively large-scale) computer-based neural network, such proponents presuppose that there would be an emergence of intelligence that is human-like in all respects, and perhaps even exceeds humans, becoming super-intelligent ... A few quick points to ground this discussion. The human brain has an estimated 86 billion neurons and perhaps a quadrillion synapses (for more on such estimates, see this link here). There is not yet any ANN that approaches that volume.

two colleagues having a business discussion  in front of a whiteboard
Mukherjee believes leaders need the ability to navigate the in-between places that experts avoid. He posits organizations should allocate leadership responsibilities across a network because leaders cannot be everywhere. Leadership today is distributed and takes place through teams. Given this, teams need access to key knowledge bases. As well, they need to be encouraged to bridge gaps in critical knowledge. According to James Staten, VP Disruptive Innovations at Forrester, "Our guidance is that leaders should not just form dedicated innovation teams, but they need to empower cross-company (and cross-ecosystem) innovation ideation so they have a broad set of ideas to choose from.” Mukherjee argues that digital transformation requires flat organizations. At the same time, he suggests it is important to ensure people understand their business's strategic intent. They need to “get to the higher ground versus go take the mountain.” Making this work involves acquiring team members who come up with solutions rather than just define problems. This starts by redesigning the work teams do. According to Jeanne Ross, it also involves creating an accountability framework.

Learn how New Relic works, and when to use it for IT monitoring

New Relic APM gathers metrics on web transactions, including response time on the web server side, throughput expressed in requests per minute and application errors over time, as well as metrics on individual HTTP requests. The tool also digs into the metrics of major database applications, such as SQL, to report response times and throughput, time per query, slow queries and other details that help pinpoint SQL statements that might bog down a website. New Relic APM supports Java and external environments. It can collect Java virtual machine (JVM) metrics, such as heap and non-heap memory, garbage collection, class count, thread pools, HTTP sessions and transactions. ... New Relic APM provides detailed error analytics that identify the exact error locations and classify the associated transactions and error types. Admins can filter results to tease out specific error details and attributes for each trace. A thread profiler shows the relative activity areas of the application to locate possible bottlenecks for remediation.
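The two headline metrics here, average response time and throughput in requests per minute, are straightforward to compute from raw request records; a sketch with made-up data:

```python
# Each record: (timestamp_seconds, duration_ms) for one HTTP request.
requests = [(0, 120), (10, 80), (30, 200), (45, 100), (59, 150)]

# Average server-side response time across the sample.
avg_response_ms = sum(d for _, d in requests) / len(requests)

# Throughput: requests per minute over the observed span.
elapsed_minutes = (requests[-1][0] - requests[0][0]) / 60 or 1
throughput_rpm = len(requests) / elapsed_minutes

print(avg_response_ms)             # → 130.0
print(round(throughput_rpm, 1))
```

An APM agent does the same arithmetic continuously, bucketing requests into time windows and breaking the averages down per transaction and per SQL query.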

VueJS vs ReactJS: Which Will Reign in 2020?

Both are considered among the best JavaScript frameworks, but they offer different features and functionality. The basic difference between Vue.js and React is that Vue.js makes use of templates with declarative rendering, while React uses JSX, a JavaScript extension that keeps HTML within it. This means that React needs more effort even for a simpler task in comparison to Vue; the original article illustrates this with side-by-side examples of a simple Vue implementation and its more complex React equivalent. ... As for the popularity of the two frameworks, the stats show that React is more popular than Vue.js, although JavaScript frameworks regularly swap positions in popularity rankings. According to the searches, React is on top with 48,718 dependents, whereas Vue.js is the second most popular JavaScript framework with roughly half as many dependents (21,575).

3 things I bet you didn’t know about multicloud security

First, traditional approaches to security won’t work. Those of you who have had success in enterprises using traditional security approaches, such as role-based access, won’t find the same results in multicloud. Multicloud requires that you deal with the complexity it brings and leverage security that can be configured around that complexity. IAM (identity and access management), married with a good encryption system for data both at rest and in flight, is a much better option. Second, you can’t rely on cloud-native security. Although the security that comes with AWS, Azure, and Google Cloud works great for its native platform, it is not designed to secure a non-native or competitor’s platform, for obvious reasons. Still, I run into enterprise users who use a cloud-native security platform as a centralized security manager and fail instantly. ... Finally, you’re responsible for more than you think. Public cloud providers put forth the shared-responsibility model as a way to help their cloud customers understand that although the providers do offer some rudimentary security, ultimately enterprise cloud users are responsible for their own security in the cloud. In a multicloud arrangement this is even more the case.

New attack on home routers sends users to spoofed sites that push malware

It remains unclear how attackers are compromising the routers. The researchers, citing data collected from Bitdefender security products, suspect that the hackers are guessing passwords used to secure routers’ remote management console when that feature is turned on. Bitdefender also hypothesized that compromises may be carried out by guessing credentials for users’ Linksys cloud accounts. The router compromises allow attackers to designate the DNS servers connected devices use. DNS servers use the Internet domain name system to translate domain names into IP addresses so that computers can find the location of sites or servers users are trying to access. By sending devices to DNS servers that provide fraudulent lookups, attackers can redirect people to malicious sites that serve malware or attempt to phish passwords. The malicious DNS servers do resolve the domains that targets request. Behind the scenes, however, the sites are spoofed: they’re served from malicious IP addresses rather than the legitimate IP address used by the domain owner.

Memory Issues For AI Edge Chips

AI chips — sometimes called deep-learning accelerators or processors — are optimized to handle various workloads in systems using machine learning. A subset of AI, machine learning utilizes a neural network to crunch data and identify patterns. It matches certain patterns and learns which of those attributes are important. These chips are targeted for a whole spectrum of compute applications, but there are distinct differences in those designs. For example, chips developed for the cloud typically are based on advanced processes, and they are expensive to design and manufacture. And edge devices, meanwhile, include chips developed for the automotive market, as well as drones, security cameras, smartphones, smart doorbells and voice assistants, according to The Linley Group. In this broad segment, each application has different requirements. For example, a smartphone chip is radically different than one created for a doorbell. For many edge products, the goal is to develop low-power devices with just enough compute power.

Visual Studio 2019: Now IntelliSense linter for C++ programming language cleans up code

The feature can be enabled in Visual Studio 2019 version 16.6 from the Preview Features pane within the Tools > Options menu. Microsoft developed the linter to make it easier for developers to pick up C++, with a focus on finding and fixing logic and runtime errors in code before it is built. In future releases of the linter, Microsoft plans to let developers dial the severity of individual checks up or down, and it will integrate the linter with other code-analysis tools. Microsoft has also released the third preview of the WebAssembly version of its Blazor renderer for building web apps that work offline. It follows last month's release of the second Mobile Blazor Bindings preview for building native iOS and Android apps using C# and .NET. This Blazor WebAssembly preview enables debugging in Visual Studio and Visual Studio Code, and automatic rebuilds in Visual Studio. It brings configuration updates as well as new HttpClient extension methods for JSON handling. Developers need to install version 3.1.201 or later of the .NET Core SDK to use the latest Blazor WebAssembly preview, which Microsoft expects to reach general availability in May. Currently, the only Blazor renderer that has reached general availability is the Blazor Server remote renderer, while Microsoft has yet to fully commit to the future of Mobile Blazor Bindings.

Top 5 Machine Learning Algorithms You Need to Know

Logistic Regression is similar to linear regression, but is a binary classification algorithm: it predicts the probability that a given input belongs to one of two classes (for example, whether an image shows a "pie" or a "cake"). It works with binary data and is meant to predict a categorical "fit" (one being success and zero being failure, with probabilities in between), whereas linear regression's result could take infinite values and predicts a value with a straight line. Logistic regression instead produces a logistic curve constrained to values between zero and one to examine the relationship between the variables ... Naive Bayes is a family of supervised classification algorithms that calculate conditional probabilities. They're based on Bayes’ Theorem, which finds a probability when other probabilities are known, under the assumption that the presence of a particular feature in a class is independent of the presence of other features. For example, you could say a sphere is a tennis ball if it is yellow, small, and fuzzy.
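A minimal sketch of both ideas: the logistic function that constrains outputs to values between zero and one, and a direct application of Bayes' Theorem (the weights and probabilities are made up for illustration):

```python
import math

def sigmoid(z):
    """Logistic function: squashes any real value into (0, 1),
    which is why logistic regression outputs read as probabilities."""
    return 1 / (1 + math.exp(-z))

# A fitted logistic model is just a weighted sum fed through the squash.
w, b = 2.0, -1.0                # toy weight and intercept
print(sigmoid(w * 0.5 + b))     # z = 0, the decision boundary → 0.5

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
def bayes(p_b_given_a, p_a, p_b):
    return p_b_given_a * p_a / p_b

# e.g. P(tennis ball | yellow) from made-up prior probabilities
print(round(bayes(0.9, 0.1, 0.2), 2))  # → 0.45
```

Naive Bayes applies the second formula once per feature (yellow, small, fuzzy) and multiplies the results, which is exactly where the independence assumption comes in.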

Understanding Dynamics 365 for IT: Architecture, integration, and more

Central to Dynamics 365 is the Common Data Service (CDS) and its Common Data Model (CDM). This provides a foundation for data integration across all Dynamics 365 applications and services, your productivity and collaboration apps in Microsoft 365, your in-house systems, and even your SaaS applications in other clouds. The Common Data Service is a heterogeneous storage service for both structured tabular data and unstructured data such as images or log files. It runs in Microsoft Azure and is shared by Dynamics 365 applications, Microsoft 365, and the Microsoft Power Platform. The Common Data Service understands the shape of your data and the business logic over your data. The Common Data Model supports a consistent way of shaping and connecting your data; Microsoft has open-sourced the schemas used in the Common Data Service, and those schemas form the foundation of the Common Data Model, or CDM.

Quote for the day:

"Risks are the seeds from which successes grow." -- Gordon Tredgold

Daily Tech Digest - March 26, 2020

3 Ways Role-Based Access Control can Help Organizations

RBAC is a policy-neutral access control solution built around roles and privileges. Also known as role-based security, RBAC helps restrict access to authorized users only, and it supports both discretionary and mandatory access controls per business requirements. Its features, including but not limited to permission groups, role permissions, and user-role or role-role relationships, help block or restrict users from performing unauthorized actions or tasks or from using unauthorized data storage. Without an enforcing access control system, employees can do almost anything. For example, an employee can send a modified invoice or quote with his own bank account information, stealing the payment from the organization’s clients. Or he can provide access to third-party persons or organizations, allowing them to infiltrate your organization, view or steal your sensitive data, and more. ... With a role-based access control system, you can reduce the paperwork for onboarding employees, changing passwords, switching roles, and so on. You can use the control system to add or switch roles quickly and to apply roles and permissions to multiple employees or globally. Since the complete access control settings sit under one platform, there are fewer errors and greater efficiency when assigning roles and permissions to employees.
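A minimal sketch of the core RBAC idea, in which users gain permissions only through roles (the role and permission names are invented for illustration):

```python
# Roles map to permission sets; users are checked through their roles,
# never granted permissions directly.
ROLE_PERMISSIONS = {
    "accountant": {"invoice:read", "invoice:create"},
    "admin":      {"invoice:read", "invoice:create", "user:manage"},
}

USER_ROLES = {"alice": {"accountant"}, "bob": {"admin"}}

def can(user, permission):
    """True if any of the user's roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(can("alice", "invoice:create"))  # → True
print(can("alice", "user:manage"))     # → False: not granted by her role
```

Switching an employee's responsibilities becomes a one-line change to `USER_ROLES`, which is exactly the onboarding and role-switching efficiency described above.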

Data-layer security is a new imperative as employees telecommute due to coronavirus

Cybersecurity began as an effort to wall off companies from the outside world, protecting trade secrets, customer data, and other sensitive information from unauthorized people. Since then, the world has grown far more complicated. Data has become increasingly important even as it has been moved to the "cloud," and accessed through the internet. No longer do just employees need access to that data--customers do, too. And no longer do just people need access to that data--other computer systems do, too. Corporate computer systems are no longer isolated forts, they are interconnected hives with information passing back and forth in myriad ways. The result has been a steady increase in ways for criminals to get that data, and a steady drumbeat of increasingly spectacular breaches, with criminals stealing everything from credit card and social security numbers to the blueprints for nuclear power plants. With virtual private networks that were built to handle modest numbers of workers now facing hordes, the threat vectors are proliferating.

Big Data: Leading trends in use, governance and technology

One of the benefits of using AI is that it can improve data quality. This improvement is needed within any analytics-driven organisation where the proliferation of personal, public, cloud, and on-premise data has made it nearly impossible for IT to keep up with user demand. Companies want to improve data quality by taking advanced design and visualisation concepts typically reserved for the final product of a BI solution, namely dashboards and reports, and putting them to work at the very beginning of the analytics lifecycle. AI-based data visualisation tools, such as Qlik’s Sense platform and Google Data Studio, are enabling enterprises to identify critical data sets which need attention for business decision-making, reducing human workloads. In an effort to speed time-to-market for custom-built AI tools, technology vendors are introducing pre-enriched, machine-readable data specific to given industries. Intended to help data scientists and AI engineers, these kits include the data necessary to create AI models that will speed the creation of those models. For example, the IBM Watson Data Kit for food menus includes 700,000 menus from across 21,000 US cities and dives into menu dynamics like price, cuisine, ingredients, etc.

Executives: employees are the greatest threat to critical cyber security

The independent report, “Weathering the Perfect Storm: Securing the Cyber-Physical Systems of Critical Infrastructure,” queried over 400 C-level executives from critical infrastructure organisations across North America, Europe and Asia/Pacific and found: 52% say employees are the biggest threat to operational security; cyber incursion into IT data systems accounted for 53% of attacks in the last 12 months; 85% of security incursions made their way into Operational Technology (OT) networks – of those, 36% started in IT/data systems and 32% involved physical incursion into OT; nearly two-thirds (64%) say it took a cyber or physical security breach to motivate them to move toward a more holistic approach to cyber security; and only a quarter believe their existing security is adequate. “The perfect storm of increasing cyber threats, digital transformation and IT/OT convergence means organisations must move swiftly to gain visibility and enhance cybersecurity into their OT and IoT networks,” said Kim Legelis, CMO, Nozomi Networks.

10 ways hackers are using automation to boost their attacks

The simple reason cyber criminals are automating processes is that they see automation as an avenue to more successful attacks and larger profits, generated more quickly and more efficiently. "Threat actors have realized that, even though in the short term it may seem that you can have a bigger windfall if you do everything from beginning to end, in the long run, if you focus on doing one thing very well, you will likely make more money," Roman Sannikov, director of cybercrime and underground intelligence at Recorded Future, told ZDNet. The 10 types of automated tools listed in the report aren't in any particular order, but researchers note that they're all extremely useful to cyber criminals looking to boost their illicit activity. ... Powerful tools that are widely available on the dark web, banking injects are modules, typically bundled within banking trojans, that inject HTML or JavaScript code into processes to redirect users from legitimate banking websites to fake ones designed to steal details. While these tools are typically expensive – they can sell for four figures on underground forums – they provide users with an automated kit that they can use to make that figure back many times over with little effort.

China-Based Threat Group Launches Widespread Malicious Campaign

Researchers from FireEye who have been tracking the activity said APT41 attacked as many as 75 of its customers between January 20 and March 11 alone. The targeted organizations are scattered across 20 countries, including the US, UK, Canada, Australia, France, Japan, and India. Organizations from nearly 20 sectors have been impacted, including those in the government, defense, banking, healthcare, pharmaceutical, and telecommunication sectors. Though only a handful of the attacks resulted in an actual security compromise, FireEye described APT41's activity as one of the broadest malicious campaigns by a Chinese threat actor in recent years. Chris Glyer, chief security architect at FireEye, says the reason for APT41's sudden burst of activity is unclear. Based on FireEye's current visibility, the attacks appear to be targeted, but it is hard to ascribe a specific motive or intent to APT41's behavior, he says. Likely triggers, however, include the ongoing trade war between the US and China and the unfolding COVID-19 pandemic.

Apple Update Fixes WebKit Flaws in iOS, Safari

“This vulnerability allows remote attackers to execute arbitrary code on affected installations of Apple Safari,” Dustin Childs, manager with Zero Day Initiative, told Threatpost. “The specific flaw exists within the object transition cache. By performing actions in JavaScript, an attacker can trigger a type confusion condition. An attacker can leverage this vulnerability to execute code in the context of the current process.” The issue “was addressed with improved memory handling,” according to Apple. Another type confusion issue (CVE-2020-3901) was found in WebKit that could lead to arbitrary code execution. This flaw could be exploited if an attacker persuades a victim to process maliciously crafted web content, according to Apple. Apple also addressed memory corruption issues (CVE-2020-3895, CVE-2020-3900) and a memory consumption issue (CVE-2020-3899) that could enable attackers to launch code execution attacks. Finally, the tech giant also fixed an input validation bug in WebKit (CVE-2020-3902) that could allow attackers to launch a cross-site scripting attack. The attackers would need to first persuade victims to process maliciously crafted web content.
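At its core, type confusion means one set of bytes written as one type is later read as another. The memory-safe sketch below reproduces that reinterpretation deliberately with Python's struct module; it is purely illustrative, as the WebKit bug operates on engine-internal object layouts rather than plain numbers.

```python
import struct

# Type confusion boils down to the same bytes being interpreted as two
# different types. Here we reinterpret the 8 bytes of an IEEE-754
# double as an unsigned 64-bit integer.
bits, = struct.unpack("<Q", struct.pack("<d", 1.5))
print(hex(bits))  # 0x3ff8000000000000: the raw bit pattern of 1.5

# In an engine-level type confusion, an attacker tricks the engine into
# performing this kind of reinterpretation on object pointers instead
# of numbers, which is what opens the door to code execution.
```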

Organizations are moving their security to the cloud, but concerns remain

Asked why they've been moving to cloud-based security, 29% of the respondents cited improvements in the monitoring and tracking of attacks, while 22% pointed to reduced maintenance. Other reasons included reductions in capital expenditures and access to the latest features. But organizations also have specific fears about switching their security tools to cloud-based variants. Asked about their concerns, 30% of the respondents pointed to the privacy of their data, 16% to unauthorized access, 14% to server outages, 14% to integration with other security tools, and 13% to the sovereignty of their data. Further, some 32% said they thought it would be too hard or too risky to migrate their security tools to the cloud. Another 32% said they didn't know what concerns their organization had about this type of migration. Among the organizations that have moved to cloud-based security tools, 22% cited email as the most widely protected type of data, 21% customer information, 20% file sharing, and 18% personnel files. Only 12% of the respondents said they're using cloud-based security to protect corporate financial data.

Edge Computing: 5 Design Considerations for Storage

Today’s data challenges are heterogeneous. Data is scattered and unstructured across mixed storage and computing environments – endpoints, edge, on-premises, cloud, or a hybrid of these. Data is also accessed across different architectures, including file-based, database, object, and containers. There are also problems of duplicated and conflicting data. 5G will surely add more complexity to these existing challenges. With 5G, even more data will be generated from endpoints and IoT devices, with more metadata and contextual data produced and consumed. As a result, there will be more demand for real-time processing, with more edge compute processing, analysis, and data storage scattered throughout the network. Each application and use case is unique and has different storage requirements and challenges, including performance, data integrity, workloads, data retention, and environmental restrictions. In the past, the capabilities of general-purpose storage greatly exceeded the requirements of networks, data, and applications.

GitOps brings the power of Git into Ops

Linus Torvalds might be best known as the creator of Linux, but Git, the distributed version control system he invented, is arguably even more important. Torvalds has said that “Git proved I could be more than a one-hit wonder,” but this is an understatement in the extreme. While there were version control systems before Git (e.g., Subversion), Git has revolutionized how developers build software since its introduction in 2005. Today Git is a “near universal” ingredient of software development, according to studies pulled together by analyst Lawrence Hecht. How “near universal”? Well, Stack Overflow surveys put it at 87 percent in 2018, while JetBrains data has it jumping from 79 percent (2017) to 90 percent (2019) adoption. Because so much code sits in public and, even more, private Git repositories, we’re in a fantastic position to wrap operations around Git. To quote Weaveworks CEO Alexis Richardson, “Git is the power option, [and] we would always recommend it if we could, but it is very wrong to say that GitOps requires expertise in Git. Using Git as the UI is not required. Git is the source of truth, not the UI.” Banks, for example, have old repositories sitting in Subversion or Mercurial. Can they do GitOps with these repositories?
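The idea of a repository as the source of truth rests on a reconciliation loop: an agent compares the desired state declared in version control with the running state and applies the difference. A minimal sketch of that loop, independent of any particular tool – the resource names and state format here are hypothetical:

```python
# Minimal GitOps-style reconciliation sketch. "desired" stands in for
# manifests read from the version-controlled config repo; "actual" for
# the state reported by the running system.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to converge actual onto desired."""
    actions = []
    # Anything missing or different from the declared state gets applied.
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions.append(f"apply {name} -> {spec}")
    # Anything running but no longer declared gets removed.
    for name in actual.keys() - desired.keys():
        actions.append(f"delete {name}")
    return actions

desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
actual = {"web": {"replicas": 2}, "worker": {"replicas": 1}}
for action in reconcile(desired, actual):
    print(action)
```

Because the loop only reads a declared state and a reported state, the declared state could just as well come from Subversion or Mercurial – which is the sense in which Git is the source of truth rather than the UI.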

Quote for the day:

"All organizations are perfectly designed to get the results they are now getting. If we want different results, we must change the way we do things." -- Tom Northup