
Daily Tech Digest - September 04, 2021

AMD files teleportation patent to supercharge quantum computing

AMD has filed a patent application for 'teleportation', meaning things could be about to get much more efficient around here. With the incredible technological feats humanity achieves on a daily basis, and Nvidia's Jensen going off on one last year about GeForce holodecks and time machines, it's easy for us to slip into a headspace that lets us believe genuine human teleportation is just around the corner. "Finally," you sigh, mouthing the headline to yourself. "Goodbye work commute, hello popping to Japan for an authentic ramen on my lunch break." ... Essentially, the 'out-of-order' execution method AMD is looking to lay claim to ensures that qubits which would otherwise be left idle, waiting for their calculation step to come around, are able to execute independently of a prior result. Where usually they would need to wait for previous qubits to provide instructions, they can calculate simultaneously, with no need to wait in line. So, no, we're not going to be zipping through wormholes just yet. But if AMD's designs come through, we could be looking at a much more efficient, scalable and stable quantum computing architecture than we have now.
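The patent itself concerns quantum hardware, but the scheduling idea is a classical one: run each operation as soon as its inputs are ready, rather than in strict program order. Here is a minimal sketch in Python; the operation names and dependency graph are invented purely for illustration and have nothing to do with AMD's actual design.

```python
# Illustrative only: group operations into parallel "waves" so an operation
# runs as soon as all of its dependencies have completed, instead of
# waiting its turn in strict program order.

def schedule_waves(deps):
    """deps maps each operation to the set of operations it depends on."""
    done, waves = set(), []
    while len(done) < len(deps):
        ready = [op for op in deps if op not in done and deps[op] <= done]
        if not ready:
            raise ValueError("cyclic dependency")
        waves.append(sorted(ready))
        done.update(ready)
    return waves

# Four operations: c depends on a; b and d are independent of everything,
# so a, b, and d can all run in the first wave, with c in the second.
deps = {"a": set(), "b": set(), "c": {"a"}, "d": set()}
print(schedule_waves(deps))  # [['a', 'b', 'd'], ['c']]
```

An in-order scheduler would have run these four operations in four steps; dependency-aware scheduling finishes in two.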


The Internet of Things Requires a Connected Data Infrastructure

Not long ago, a terabyte of information was an enormous amount and might be the foundation for solid decision-making. These days, it won’t cut it. For example, looking at a terabyte of data might yield a decision that’s 70% accurate. But leaving 30% to chance is unacceptable when it comes to real-time vehicle safety. On the other hand, the ability to ingest and process 40 terabytes — from all sources, edge to core — can push accuracy well past 90%. Something jumps in front of your car — is it a person, a dog, a trash bag, a child’s ball? Real-time systems need to determine the level of risk and react in milliseconds. Real-time processing has to be done closer to where the decisions are being made. In terms of IoT, a lot of questions can be answered by using a digital twin. Digital twins create additional layers of insight, provide a better understanding of what’s happening in any given situation, and help decide on the most appropriate course of immediate action. They take insight not just from the raw sensors — the edge compute nodes — but from a combination of real-time data at the edge and historical data at the core.


Can Your Organization Benefit from Edge Data Centers?

Organizations considering a move to edge computing should begin their journey by inventorying their applications and infrastructure. It's also a good idea to assess current and future user requirements, focusing on where data is created and what actions need to be performed on that data. "Generally speaking, the more susceptible data is to latency, bandwidth, or security issues, the more likely the business is to benefit from edge capabilities," said Vipin Jain, CTO of edge computing startup Pensando. “Focus on a small number of pilot projects and partner with integrators/ISVs with experience in similar deployments." Fugate recommended examining business functions and processes and linking them to the application and infrastructure services they depend on. "This will ensure that there isn’t one key centralized service that could stop critical business functions," he said. "The idea is to determine what functions must survive regardless of an infrastructure or connectivity failure." Fugate also advised determining how to effectively manage and secure distributed edge platforms.

How to Speed Up Your Digital Transformation

Complexity-in-use is often overlooked in digitalization projects because those in charge assume that accounting for task complexity and system complexity independently of one another is enough. In our case, at the beginning of the transformation, tasks and processes were considered relatively stable and independent of the new system. As a result, the loan-editing clerks were unable to complete business-critical tasks for weeks, and management needed to completely reinvent their change management approach to turn the project around and overcome operational problems in the high complexity-in-use area. They brought in more people to reduce the backlog, developed new training materials, and even changed the newly implemented system — a problem-solving approach that organizations with smaller budgets wouldn’t find easy to deploy. In the end, our study partner managed this herculean task, but it took them months to get the struggling departments back on track.


Ecosystems at The Edge: Where the Data Center Becomes a Marketplace

Rapidly evolving edge computing architectures are often seen as a way for businesses to enable new applications that require low latency and place computing close to the origin of data. While those are important use cases, what is less often discussed is the opportunity for businesses to leverage the edge to spawn ecosystems that generate new revenue. To realize this value, companies must think of the edge as more than just a collection point for data from intelligent devices. They should broaden their vision to see the edge as a new business hub. These small data centers can evolve into full-fledged service providers that attract local businesses, generate e-commerce transactions and enable interconnections that never touch the central cloud. Edge computing is an expansion of cloud infrastructure that moves data collection, processing and services closer to the point at which data is created or used. It is the fastest-growing segment of the cloud category with the total market expected to expand 37% annually through 2027, according to Grand View Research.


NSA: We 'don't know when or even if' a quantum computer will ever be able to break today's public-key encryption

In the NSA's summary, a CRQC – should one ever exist – "would be capable of undermining the widely deployed public key algorithms used for asymmetric key exchanges and digital signatures" – and what a relief it is that no one has one of these machines yet. The post-quantum encryption industry has long sought to portray quantum computing as an immediate threat to today's encryption, as El Reg detailed in 2019. "The current widely used cryptography and hashing algorithms are based on certain mathematical calculations taking an impractical amount of time to solve," explained Martin Lee, a technical lead at Cisco's Talos infosec arm. "With the advent of quantum computers, we risk that these calculations will become easy to perform, and that our cryptographic software will no longer protect systems." Given that nations and labs are working toward building crypto-busting quantum computers, the NSA said it was working on "quantum-resistant public key" algorithms for private suppliers to the US government to use, having had its Post-Quantum Standardization Effort running since 2016.

There are multiple ways that AI could become a detriment to society. Machine learning, a subfield of AI, learns from vast quantities of data and hence carries the risk of perpetuating data bias. AI use cases such as facial recognition and predictive analytics could adversely impact protected classes in areas such as loan decisions and criminal justice, leading to unfair outcomes for certain people. ... AI is only as good as the data used to train it. From an industry perspective, this is problematic given there is often a lack of training data for true failures in critical systems. This becomes dangerous when a wrong prediction leads to potentially life-threatening events such as manufacturing accidents or oil spills. This is why a focus on hybrid AI and “explainable AI” is necessary. ... Unfortunately, cybercriminals have historically been better and faster adopters of technology than the rest of us. AI can become a detriment to society when deepfakes and deep learning models are used as vehicles for social engineering, with scammers stealing money, sensitive data and confidential intellectual property by pretending to be people and entities we trust.


Reviewing the Eight Fallacies of Distributed Computing

The challenges of distributed systems, and the broad science around the techniques and mechanisms used to build them, are now well researched. The thing you learn when addressing these challenges in the real world, however, is that academic understanding only gets you so far. Building distributed systems involves engineering pragmatism and trade-offs, and the best solutions are the ones you discover by experience and experiment. ... However, the engineering reality is that multiple kinds of failures can, and will, occur at the same time. The ideal solution now depends on the statistical distribution of failures; or on analysis of error budgets, and the specific service impact of certain errors. The recovery mechanisms can themselves fail due to system unreliability, and the probability of those failures might impact the solution. And of course, you have the dangers of complexity: solutions that are theoretically sound, but complex, might be far more complicated to manage or understand whenever an incident takes place than simpler mechanisms that are theoretically not as complete.
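The "error budgets" mentioned above come down to simple arithmetic: an availability target implies a fixed allowance of downtime per window, which teams can spend on failures, releases, or risky changes. A quick sketch in Python; the 99.9% target and 30-day window are arbitrary examples, not figures from the article.

```python
# Convert an availability target (SLO) into an error budget: the downtime a
# service may accumulate per window while still meeting the target.

def error_budget_minutes(slo, window_days):
    """Allowed downtime in minutes for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% target over 30 days allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999, 30), 1))  # 43.2
```

Once the budget is exhausted, the pragmatic response is usually to freeze risky changes rather than add recovery machinery whose own failure modes are poorly understood.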


Machine Learning Algorithm Sidesteps the Scientific Method

We might be most familiar with machine learning algorithms as they are used in recommendation engines, facial recognition, and natural language processing applications. In the field of physics, however, machine learning algorithms are typically used to model complex processes like plasma disruptions in magnetic fusion devices or the dynamic motion of fluids. In the case of this work by the Princeton team, the algorithm skips the interim step of being explicitly programmed with the conventions of physics. “The algorithms developed are robust against variations of the governing laws of physics because the method does not require any knowledge of the laws of physics other than the fundamental assumption that the governing laws are field theories,” said the team. “When the effects of special relativity or general relativity are important, the algorithms are expected to be valid as well.” The researchers’ approach was inspired in part by Oxford philosopher Nick Bostrom’s thought experiment that the universe is actually a computer simulation.


What's the Real Difference Between Leadership and Management?

Leaders, like entrepreneurs, are constantly looking for ways to add to their world of expertise. They tend to enjoy reading, researching and connecting with like-minded individuals; they constantly aim to grow. They are usually open-minded and seek opportunities that challenge them to expand their level of thinking, which in turn leads to developing more solutions to problems that may arise. Managers, by contrast, often rely on existing knowledge and skills, repeating proven strategies or behaviors that have worked in the past to maintain a steady track record with clients. ... Leaders create trust and bonds with their mentees that go beyond expression or definition. Their mentees become raving fans willing to go above and beyond the usual scope of supporting their leader in achieving his or her mission. In the long run, the overwhelming support from these fans helps increase the value and credibility of the leader. On the other hand, managers direct, delegate, enforce and advise either an individual or a group that typically represents a brand or organization looking for direction. Followers do as they are told and rarely ask questions.



Quote for the day:

"Most people don't know how AWESOME they are, until you tell them. Be sure to tell them." -- Kelvin Ringold

Daily Tech Digest - August 08, 2021

The Role of Artificial Consciousness in AI Systems

What this means is that AI programs having common sense may not be enough to deal with un-encountered situations because it’s difficult to know the limits of common sense knowledge. It may be that artificial consciousness is the only way to ascribe meaning to the machine. Of course, artificial consciousness will be different to the human variant. Philosophers like Descartes, Daniel Dennett, and the physicist Roger Penrose and many others have given different theories of consciousness about how the brain produces thinking from neural activity. Neuroscience tools like fMRI scanners might lead to a better understanding of how this happens and enable a move to the next level of humanizing AI. But that would involve confronting what the Australian philosopher, David Chalmers, calls the hard problem of consciousness – how can subjectivity emerge from matter? Put another way, how can subjective experiences emerge from neuron activity in the brain? Furthermore, our understanding of human consciousness can only be understood through our own inner experience – the first-person perspective. 


Creating a Quality Strategy

Some teams might prefer to do ad-hoc exploratory testing with minimal documentation. Other teams might have elaborate test case management systems that document all the tests for the product. And there are many other options in between. Whatever you choose should be right for your team and right for your product. ... On some teams, the developers write the unit tests, and the testers write the API and UI tests. On other teams, the developers write the unit and API tests, and the testers create the UI tests. Even better is to have both the developers and the testers share the responsibility for creating and maintaining the API and UI tests. In this way, the developers can contribute their code management expertise, while the testers contribute their expertise in knowing what should be tested. ... Some larger companies may have dedicated security and performance engineers who take care of this testing. Small startups might have only one development team that needs to be in charge of everything.


It's time to improve Linux's security

Believe it or not, many vendors, especially in the Internet of Things (IoT), choose not to fix anything. Sure, they could do it. Several years ago, Linus Torvalds, Linux's creator, pointed out that "in theory, open-source [IoT devices] can be patched. In practice, vendors get in the way." Cook remarked that, with malware here, botnets there, and state attackers everywhere, vendors certainly should protect their devices, but all too often they don't: "Unfortunately, this is the very common stance of vendors who see their devices as just a physical product instead of a hybrid product/service that must be regularly updated." Linux distributors, however, aren't as neglectful. They tend to "cherry-pick only the 'important' fixes. But what constitutes 'important' or even relevant? Just determining whether to implement a fix takes developer time." It hasn't helped any that Linus Torvalds has sometimes made light of security issues. For example, in 2017, Torvalds dismissed some security developers as "f-cking morons." He didn't mean to put all security developers in the same basket, but his colorful language set the tone for too many Linux developers.


Creating a Secure REST API in Node.js

Node.js, an open-source technology, was long sponsored by Joyent, a cloud computing and hosting provider. The firm also backed several other technologies, such as the Ruby on Rails framework, and provided hosting for Twitter and LinkedIn. LinkedIn became one of the first companies to use Node.js, building the backend of its mobile application on it. The technology was subsequently picked up by many other companies, including Uber, eBay, and Netflix, though it took a while before server-side JavaScript with Node.js saw wide adoption. Investment in the technology peaked around 2017, and it is still trending near the top. Most popular code editors offer support and plugins for JavaScript and Node.js, so it largely comes down to how you customize your IDE to your coding requirements; many Node.js developers favor tools such as VS Code, Brackets, and WebStorm. Using middleware, rather than writing everything against the bare Node.js APIs, is a common practice that makes developers' lives easier.


In a world first, South Africa grants patent to an artificial intelligence system

At first glance, a recently granted South African patent relating to a “food container based on fractal geometry” seems fairly mundane. The innovation in question involves interlocking food containers that are easy for robots to grasp and stack. On closer inspection, the patent is anything but mundane. That’s because the inventor is not a human being – it is an artificial intelligence (AI) system called DABUS. ... The granting of the DABUS patent in South Africa has received widespread backlash from intellectual property experts. The critics argued that it was the incorrect decision in law, as AI lacks the necessary legal standing to qualify as an inventor. Many have argued that the grant was simply an oversight on the part of the commission, which has been known in the past to be less than reliable. Many also saw this as an indictment of South Africa’s patent procedures, which currently only consist of a formal examination step. This requires a check box sort of evaluation: ensuring that all the relevant forms have been submitted and are duly completed.


Ford's new BlueCruise hands-off driving feature is a solid first effort

It keeps the vehicle in the center of the lane, but with a little too much urgency. It's not a safety issue, but to a driver unfamiliar with what's going on, the steering movements are a little too frequent and a little too jerky. I can tell that the computer is working really hard to keep the car centered at all times — I compared it to a 16-year-old driver who was still learning the ropes and wasn't quite confident in their abilities, making frequent, jerky adjustments rather than the smoother, more practiced inputs that an experienced driver would make. It isn't necessary to always be centered exactly in the lane, after all — an experienced driver knows that drifting a few inches to the left or right is normal. I said to the Ford engineers that most people probably wouldn't notice the tiny steering inputs, but they might lose confidence in the system because of it, even if they couldn't quite put their finger on why. Future releases will improve on it, I'm sure. BlueCruise also isn't (yet) aware of anything going on to the side of or behind the vehicle.


Critical Cobalt Strike bug leaves botnet servers vulnerable to takedown

Cobalt Strike is a legitimate security tool used by penetration testers to emulate malicious activity in a network. Over the past few years, malicious hackers—working on behalf of a nation-state or in search of profit—have increasingly embraced the software. For both defender and attacker, Cobalt Strike provides a soup-to-nuts collection of software packages that allow infected computers and attacker servers to interact in highly customizable ways. The main components of the security tool are the Cobalt Strike client—also known as a Beacon—and the Cobalt Strike team server, which sends commands to infected computers and receives the data they exfiltrate. An attacker starts by spinning up a machine running Team Server that has been configured to use specific “malleability” customizations, such as how often the client is to report to the server or specific data to periodically send. Then the attacker installs the client on a targeted machine after exploiting a vulnerability, tricking the user or gaining access by other means.


Test Debt Fundamentals: What, Why & Warning Signs

Test Debt is hard to measure factually, but we can rely on our human capacity to detect, feel, and react to warning signs. For test automation, we can sense organizational behaviors and specific test automation attributes. Let’s get back to the why of our automated tests. One objective of a test automation effort is to accelerate the delivery of software changes with confidence. That value disappears when the team starts to bypass the test automation campaign, search for alternative routes, or ask for exceptions. Various causes are possible, such as long execution times, instability, lack of understanding, or other maintainability issues. Execution time is directly tied to essential indicators of software delivery: lead time for changes, cycle time, and MTTA. These metrics are all part of the Accelerate report, which correlates an organization’s performance with these measures. We need to constrain our test execution time to limit its impact on these acceleration metrics. For test automation, that means fewer but more valuable tests, executed faster.


Systems of systems: The next big step for edge AI

SoS will allow autonomous or semi-autonomous systems to control and respond to data flows. In the defense sector, for example, it will connect the data dots gathered from weather analysis, radars, and video surveillance to provide either the quickest path for a missile or the best way to intercept it. Separately, a train technology provider that delivers transportation as a service needs to unify the subsystems in a train and in a train station, expediting failure flagging and repairs to reduce costly service delays. In each case, a system of systems will inform or replace human decision-making, leading to faster, smarter, and more precise insights. ... It’s no stretch to say that edge AI-powered systems of systems will change society as we know it. Like bees working together to build and maintain a hive, algorithms in a SoS will form a swarm. Cars that can communicate with each other will be collectively smarter and safer than any individual car. Inside one vehicle, a SoS will coordinate navigation and telematics while independently gathering live weather and traffic data from roads.


Mainframes: The Missing Link To AI (Artificial Intelligence)?

The power of AI for mainframes does not have to come from building new projects from scratch. For example, there are emerging AIOps tools that help automate the systems. Some of the benefits include improved performance and availability, increased support speed for application releases and the DevOps process, and the proactive identification of issues. Such benefits can be essential, since it is increasingly difficult to attract qualified IT professionals. According to a recent survey from Forrester and BMC, about 81% of the respondents indicated that they rely partially on manual processes when dealing with slowdowns, and 75% said they use manual labor for diagnosing multisystem incidents. In other words, there is much room for improvement—and AI can be a major driver for this. “Mainframe decision makers are becoming more aware than ever that the traditional way of handling mainframe operations will soon fall by the wayside,” said John McKenny, who is the Senior Vice President and General Manager of Intelligent Z Optimization and Transformation at BMC.



Quote for the day:

"Ninety percent of leadership is the ability to communicate something people want." -- Dianne Feinstein

Daily Tech Digest - February 03, 2021

Usability Testing: the Ultimate Guide [Free Checklist]

Generally speaking, usability testing comes in two types: moderated and unmoderated. Moderated sessions are guided by a researcher or a designer, while the unmoderated ones rely on users’ own unassisted efforts. Moderated tests are an excellent choice if you want to observe users interact with prototypes in real-time. This approach is more goal-oriented — it lets you confirm or disconfirm existing hypotheses with more confidence. On the other hand, unmoderated usability tests are convenient when working with a substantial pool of subjects. A large number of participants allows you to identify a broader spectrum of issues and points of view. However, it’s important to underline that testing isn’t that black and white. It’s best to look at this practice as a spectrum between moderated and unmoderated testing. Sometimes, during unmoderated sessions, we like to nudge our subjects into the right direction through mild moderation when necessary. Testing our prototypes can provide us with a wide array of insights. Fundamentally, it helps us spot flaws in our designs and identify potential solutions to the issues we’ve uncovered. We learn about the parts of our product that confuse or frustrate our users. By disregarding this step, we’re opening up to the possibility of releasing a product that causes too much friction.


Linux malware backdoors supercomputers

ESET researchers have reverse engineered this small, yet complex malware that is portable to many operating systems including Linux, BSD, Solaris, and possibly AIX and Windows. “We have named this malware Kobalos for its tiny code size and many tricks; in Greek mythology, a kobalos is a small, mischievous creature,” explains Marc-Etienne Léveillé, who investigated the malware. “It has to be said that this level of sophistication is only rarely seen in Linux malware.” Kobalos is a backdoor containing broad commands that don’t reveal the intent of the attackers. It grants remote access to the file system, provides the ability to spawn terminal sessions, and allows proxying connections to other Kobalos-infected servers, Léveillé notes. Any server compromised by Kobalos can be turned into a Command & Control (C&C) server by the operators sending a single command. As the C&C server IP addresses and ports are hardcoded into the executable, the operators can then generate new Kobalos samples that use this new C&C server. In addition, in most systems compromised by Kobalos, the client for secure communication (SSH) is compromised to steal credentials.


Disrupting the patent ecosystem with blockchain and AI

Applying the power of AI and blockchain to IP assets enables a paradigm shift in how IP is understood and managed. Companies that understand and adopt this new paradigm will be rewarded. Last year, we announced the inclusion of IPwe — the world’s first AI and blockchain-powered patent platform, among our selection of the next wave of enterprise blockchain business networks. The Paris-based start-up has since deployed a suite of leading-edge IP solutions, removing barriers by addressing fundamental issues within today’s patent ecosystem. IPwe is partnering with IBM to accelerate its mission to address the inefficiencies in the patent marketplace. IBM Cloud and IBM Blockchain teams are working closely with IPwe on a multi-year project to assist IPwe in its mission to deliver world class solutions to its enterprise, SME, university, law firms, research institutions and government customers, with a heavy emphasis on meeting the needs of financial, technology and risk management executives. In addition to giving patent owners tools that provide greater visibility, effective management, and ease of conducting transactions with patents, the IPwe Platform reduces costs for innovators, and creates commercial opportunities for those that wish to partner or engage in financial transactions.


Low-Code Platforms and the Rise of the Community Developer: Lots of Solutions, or Lots of Problems?

Most community developers will progress through three stages as they become more capable with the low-code platform. Many community developers won’t progress beyond the first or second stage, but some will go on to the third stage and build full-featured applications used throughout your business. Stage 1—UI Generation: Initially they will create applications with nice user interfaces whose data is keyed directly into the application. For example, they may make a meeting notes application that allows users to jointly add meeting notes as a meeting progresses. This is the UI Generation stage. Stage 2—Integration: As users gain experience, they’ll move to the second stage, where they start pulling in data from external systems and data sources. For example, they’ll enhance their meeting notes application to pull calendar information from Outlook and email attendees after each meeting with a copy of the notes. This is the Integration stage. Stage 3—Transformation: And, finally, they’ll start creating applications that perform increasingly sophisticated transformations. For example, they may run the meeting notes through a machine learning model to tag and store the meeting content so that it can be searched by topic. This is the Transformation stage.

XOps: Real or Hype?

Like DevOps, the various types of Ops aim to accelerate processes and improve the quality of what they're delivering: software (DevOps); data (DataOps); AI models (MLOps); and analytics insights (AIOps). Some consider the different Ops types important since the expertise required for each type differs. Others believe it's just hype, specifically relabeling what already exists and/or there's a risk that the fragmentation created by all the different groups may create extra bureaucracy that frustrates faster value delivery. Agile software development practices have been bubbling up to the business for some time. Since the dawn of the millennium, business leaders have been told their companies need to be more agile just to stay competitive. Meanwhile, many agile software development teams have adopted DevOps and increasingly they've gone a step further by embracing continuous integration/continuous delivery (CI/CD) which automates additional tasks to enable an end-to-end pipeline which provides visibility throughout and smoother process flows than the traditional waterfall handoffs. Like DevOps, DataOps, MLOps, and AIOps are cross-functional endeavors focused on continuous improvement, efficiency and process improvement.


Sigma Rules to Live Your Best SOC Life

In the Security Operations space, we have been using SIEMs for many years with varying degrees of deployment, customization, and effectiveness. For the most part, they have been a helpful tool for Security Operations. But they can be better. Like any tool, they need to be sharpened and used correctly. After a while, even a sharpened tool can become dull from too much use; with a SIEM, that takes the form of too many events creating the dreaded ALERT FATIGUE!!! This is real for security operations and must be addressed, because the more alerts there are, the more an engineer must work through, and the more they will miss. Insert Sigma rules for SIEMs (pun intended): a way for Security Operations to bring standardization to the daily tasks of building SIEM queries, managing logs, and correlating threat hunts. What is a Sigma rule, you may ask? A Sigma rule is a generic, open, YAML-based signature format that enables a security operations team to describe relevant log events in a flexible and standardized way. So, what does that mean for security operations? Standardization and collaboration are now more possible than ever before with the adoption of Sigma rules throughout the Security Operations community.
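To make the format concrete, here is a minimal Sigma rule. The field names (`title`, `logsource`, `detection`, `condition`, `level`) follow the public Sigma specification; the specific detection logic is a simplified illustration, not a production rule.

```yaml
title: Suspicious Use of whoami
status: experimental
description: Detects execution of whoami, often run by attackers after gaining a foothold
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\whoami.exe'
  condition: selection
level: medium
```

Because the rule describes the log event generically, converter tooling can compile the same YAML into the native query language of whichever SIEM a team happens to run.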


How AI Is Radically Changing Cancer Prediction & Diagnosis

Risk modelling includes assessing risk at different time points, which can determine the preventive measures to be taken at different stages. A model that predicts the risk of developing cancer at each time point in isolation from the others, however, is not very useful. Hence, scientists trained Mirai with an ‘additive hazard layer’. This layer predicts a patient’s risk at a time point, say four years, as an extension of the risk at a previous time point, say three years, instead of treating the two time points independently. This helps the model learn to make self-consistent risk assessments even with variable amounts of follow-up as input. Secondly, the model includes non-image risk factors such as age and hormonal variables, but does not strictly require them at test time, since a trained network can extract this information from the mammograms themselves. Hence, the model can be adopted globally. Lastly, standard models break down under even minor variations, such as a change in the mammography machine used. Mirai used an ‘adversarial’ scheme to de-bias the model, learning mammogram representations agnostic to the source clinical environment.
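The additive hazard idea can be sketched in a few lines: the model outputs a baseline risk plus a non-negative hazard increment for each additional year, so cumulative risk can never decrease as the horizon grows and the per-year predictions stay self-consistent. A toy illustration in Python; the numbers are invented, and in Mirai the equivalent layer sits inside a neural network rather than taking hand-written inputs.

```python
# Toy additive hazard layer: cumulative risk at year t is the baseline risk
# plus the sum of non-negative per-year hazards up to t, which guarantees
# the predicted risk never decreases as the time horizon grows.

from itertools import accumulate

def cumulative_risks(baseline, hazards):
    """baseline: risk at year 1; hazards: non-negative increments for later years."""
    assert all(h >= 0 for h in hazards), "hazards must be non-negative"
    return [baseline + s for s in accumulate([0.0] + list(hazards))]

risks = cumulative_risks(0.02, [0.01, 0.015, 0.005])  # years 1 through 4
print([round(r, 3) for r in risks])  # [0.02, 0.03, 0.045, 0.05]
```

Predicting four independent risk scores instead could easily yield a three-year risk higher than the four-year risk, which is exactly the inconsistency the additive construction rules out.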


How To Port Your Web App To Microsoft Teams

While there are many different paths to building and deploying Teams apps, one of the easiest is to integrate your existing web apps with Teams through what are called “tabs.” Tabs are basically embedded web apps created using HTML, TypeScript (or JavaScript), client-side frameworks such as React, or any server-side framework such as .NET. Tabs allow you to surface content in your app by essentially embedding a web page in Teams using <iframe> elements. Teams was specifically designed with this capability in mind, so you can integrate existing web apps to create custom experiences for yourself, your team, and your app users. One useful aspect of integrating your web apps with Teams is that you can largely keep using the developer tools you’re likely already familiar with: Git, Node.js, npm, and Visual Studio Code. To expand your apps with additional capabilities, you can use specialized tools such as the Teams Yeoman generator command-line tool, the Teams Toolkit Visual Studio Code extension, and the Microsoft Teams JavaScript client SDK. These allow you to retrieve additional information and enhance the content you display in your Teams tab.


How AI Can Read Your Brain Waves

The music study is only one of many recent efforts to understand what people are thinking using computers. The research could lead to technology that one day would help people with disabilities manipulate objects using their minds. For example, Elon Musk’s Neuralink project aims to produce a neural implant that allows you to carry a computer wherever you go. Tiny threads are inserted into areas of the brain that control movement. Each thread contains many electrodes and is connected to an implanted computer. "The initial goal of our technology will be to help people with paralysis to regain independence through the control of computers and mobile devices," according to the project’s website. "Our devices are designed to give people the ability to communicate more easily via text or speech synthesis, to follow their curiosity on the web, or to express their creativity through photography, art, or writing apps." Brain-machine interfaces might even one day help make video games more realistic. Gabe Newell, the co-founder and president of video game giant Valve, said recently that his company is trying to connect human brains to computers. The company is working to develop open-source brain-computer interface software, he said. 


Q&A: Dataiku VP discusses AI deployment in financial services

AI is also a real revolution within risk assessment, notably through the enhanced use of alternative data. This is true both for traditional risks and emerging risks such as climate change, helping all financial players — banks and insurers alike — to reconsider how they price risks. Those who have developed strong expertise in leveraging alternative data and agile modeling have been able to truly benefit from their investment during the ongoing health crisis, which has deeply challenged traditional models. Lastly, the positive impact of AI on customers should not be underestimated. Financial services are confronted with an aggressive competitive landscape as well as demand from customers for improved personalisation, driving improved customer orientation in these organisations. The capacity to build 360° customer views and optimise customer journeys, notably on claims management, are two examples of areas where AI has significantly supported deep transformation within banks and insurance companies, with much more yet to be delivered.



Quote for the day:

"Leadership is a potent combination of strategy and character. But if you must be without one, be without the strategy." -- Norman Schwarzkopf

Daily Tech Digest - November 29, 2020

Microservice Architecture: Why Containers And IoT Are A Perfect Fit

Apart from technical considerations, the way a software development team is set up also plays a critical role in the software technology decision. A major advantage of containerization is the great flexibility and manageability of the overall development process. Although previous monolithic software development often had cumbersome documentation requirements, hard-to-predict timelines and complicated synchronization processes, the container-based approach can deliver a different experience. If you're able to split up the project in isolated containers, you can divide the team into smaller groups with faster iterations and address additional feature requests more easily to cater to modern agile processes. Containerization also bridges the two worlds of cloud and embedded development by aligning the underlying technology, unifying the development workflow and leveraging automation capabilities containers provide. With that, it becomes much easier to support hybrid workflows and reuse the same software. This is important for IoT projects if customers have vastly different network environments, data ownership requirements and solution approaches.


Automation with intelligence

For organisations to successfully integrate intelligent automation, they must first acknowledge that transformation is necessary. It starts with making a conscious choice about what they want to achieve, based on the ‘art of the possible’. This decision is then fed into a robust and realistic intelligent automation strategy. That is the ideal, but here is the reality: only 26 per cent of Deloitte’s survey respondents that are piloting automations – and 38 per cent of those implementing and scaling – have an enterprise-wide intelligent automation strategy. There is a clear difference between organisations piloting automations and those implementing and scaling their efforts. The latter are more likely to reimagine what they do and incorporate process change across functional boundaries. Those in the piloting stage are more likely to automate current processes with limited change – they may not yet have taken advantage of the many technologies and techniques that can expand their field of vision and open up even more opportunities. There are other barriers to success: process fragmentation and a lack of IT readiness were ranked by survey respondents at the top of the list (consistent with responses in the past two years).


XDR: Unifying incident detection, response and remediation

The primary driver behind XDR is its fusing of analytics with detection and response. The premise is that these functions are not and should not be separate. By bringing them together, XDR promises to deliver many benefits. The first is a precise response to threats. Instead of keeping logs in a separate silo, with XDR they can be used to immediately drive response actions with higher fidelity and deeper knowledge of the details surrounding an incident. For example, the traditional SIEM approach is based on monitoring network log data for threats and responding on the network. Unless a threat is simple, like commodity malware that can be easily cleaned up, remediation is typically delayed until a manual investigation is performed. XDR, on the other hand, gives SOCs both the visibility and the ability not just to respond but also to remediate. SOC operators can take precise rather than broad actions, and not just across the network, but also the endpoint and other areas. Because XDR seeks to fuse the analysis, control and response planes, it provides a unified view of threats. Instead of forcing SOCs to use multiple interfaces to threat hunt and investigate, event data and analytics are brought together in XDR to provide the full context needed to precisely respond to an incident.


The algorithms are watching us, but who is watching the algorithms?

Improving the process of data-based decisions in the public sector should be seen as a priority, according to the CDEI. "Democratically-elected governments bear special duties of accountability to citizens," reads the report. "We expect the public sector to be able to justify and evidence its decisions." The stakes are high: earning the public's trust will be key to the successful deployment of AI. Yet the CDEI's report showed that up to 60% of citizens currently oppose the use of AI-infused decision-making in the criminal justice system. The vast majority of respondents (83%) are not even certain how such systems are used in police forces in the first place, highlighting a gap in transparency that needs to be plugged. There is a lot that can be gained from AI systems if they are deployed appropriately. In fact, argued the CDEI's researchers, algorithms could be key to identifying historical human biases – and making sure they are removed from future decision-making tools. "Despite concerns about 'black box' algorithms, in some ways algorithms can be more transparent than human decisions," said the researchers. "Unlike a human, it is possible to reliably test how an algorithm responds to changes in parts of the input."


Microsoft patents tech to score meetings using body language, facial expressions, other data

Microsoft is facing criticism for its new “Productivity Score” technology, which can measure how much individual workers use email, chat and other digital tools. But it turns out the company has even bigger ideas for using technology to monitor workers in the interest of maximizing organizational productivity. Newly surfaced Microsoft patent filings describe a system for deriving and predicting “overall quality scores” for meetings using data such as body language, facial expressions, room temperature, time of day, and number of people in the meeting. The system uses cameras, sensors, and software tools to determine, for example, “how much a participant contributes to a meeting vs performing other tasks (e.g., texting, checking email, browsing the Internet).” The “meeting insight computing system” would then predict the likelihood that a group will hold a high-quality meeting. ... Microsoft says the goal is to help organizations ensure that their workers are taking advantage of tools like shared workspaces and cloud-based file sharing to work most efficiently. This also works to Microsoft’s advantage by encouraging the use of its products such as Teams and SharePoint inside companies, making future Microsoft 365 renewals more likely.


A family of computer scientists developed a blueprint for machine consciousness

Defining consciousness is only half the battle – and one that likely won’t be won until after we’ve aped it. The other side of the equation is observing and measuring consciousness. We can watch a puppy react to stimulus. Even plant consciousness can be observed. But for a machine to demonstrate consciousness its observers have to be certain it isn’t merely imitating consciousness through clever mimicry. Let’s not forget that GPT-3 can blow even the most cynical of minds with its uncanny ability to seem cogent, coherent, and poignant. The Blums get around this problem by designing a system that’s only meant to demonstrate consciousness. It won’t try to act human or convince you it’s thinking. This isn’t an art project. Instead, it works a bit like a digital hourglass where each grain of sand is information. The machine sends and receives information in the form of “chunks” that contain simple pieces of information. There can be multiple chunks of information competing for mental bandwidth, but only one chunk of information is processed at a time. And, perhaps most importantly, there’s a delay in sending the next chunk. This allows chunks to compete – with the loudest, most important one often winning.
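That "competing chunks" mechanism can be sketched in a few lines, under the assumption that each chunk carries a loudness score and only the loudest is processed per step. The class and names below are illustrative, not the Blums' actual design.

```python
import heapq

class Workspace:
    """Toy model of the described mechanism: many chunks compete for
    bandwidth, but only the single loudest chunk is processed per step."""

    def __init__(self):
        self._queue = []  # max-heap via negated loudness

    def submit(self, loudness, chunk):
        heapq.heappush(self._queue, (-loudness, chunk))

    def step(self):
        """Process exactly one chunk: the loudest currently competing."""
        if not self._queue:
            return None
        _, chunk = heapq.heappop(self._queue)
        return chunk

ws = Workspace()
ws.submit(0.2, "background hum")
ws.submit(0.9, "loud alarm")
ws.submit(0.5, "phone buzz")
print(ws.step())  # "loud alarm" wins the first competition
```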


AI in Practice, With and Without Data

Data-based methods work well for situations where newly observed data do not deviate too much from old data learned. In particular, data-intensive methods showed astonishing results in the domains of image, speech, and language understanding, and also in gaming. In fact, they are the quintessential implementation of what Nobel Prize-winning economist Daniel Kahneman refers to as System-1 in his theory of the mind. Based on this theory, the mind is composed of two systems: System-1 governs our perception and classification, and System-2 governs our reasoning and planning. ... To quote Daphne Koller, “the world is noisy and messy” and we need to deal with noise and uncertainty, even when data is available in quantity. Here, we enter the domain of probability theory and the best set of methods to consider is probabilistic graphical models, where you model the subject under consideration. There are three kinds of probabilistic graphical models, from the least sophisticated to the most sophisticated: Bayesian networks, Markov networks, and hybrid networks. In these methods, you create a model that captures all the relevant general knowledge about the subject in quantitative, probabilistic terms, such as the cause-effect network of a troubleshooting application.
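As a minimal illustration of the cause-effect idea, here is the smallest possible "network": a single fault node influencing a single symptom node, with made-up probabilities. Bayes' rule turns the encoded general knowledge (a prior and two conditionals) into a diagnostic answer.

```python
# Two-node Bayesian network: Fault -> Symptom, with illustrative numbers.
p_fault = 0.01                      # prior: P(fault)
p_symptom_given_fault = 0.95        # P(symptom | fault)
p_symptom_given_ok = 0.05           # P(symptom | no fault), false alarms

# Bayes' rule: P(fault | symptom) = P(symptom | fault) P(fault) / P(symptom)
p_symptom = (p_symptom_given_fault * p_fault
             + p_symptom_given_ok * (1 - p_fault))
posterior = p_symptom_given_fault * p_fault / p_symptom
print(round(posterior, 3))  # prints 0.161
```

Even with a highly reliable symptom, the low prior keeps the posterior modest: exactly the kind of reasoning a troubleshooting application needs and a purely pattern-matching System-1 model does not express.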


Organizations with a culture of innovation fuelling business resilience

The study introduced the culture of innovation framework, which spans the dimensions of people, process, data, and technology, to assess organizations’ approach to innovation. It surveyed 439 business decision makers and 438 workers in India within a 6-month period, before and since COVID-19. The India study was part of a broader survey among 3,312 business decision makers and 3,495 workers across 15 markets in the Asia Pacific region. Through the research, organizations’ maturity was mapped and, as a result, organizations were grouped into four stages – traditionalist (stage 1), novice (stage 2), adaptor (stage 3) and leaders (stage 4). Leaders comprise organizations that are the most mature in building a culture of innovation. “Innovation is no longer an option, but a necessity. We have seen how the recent crisis has spurred the need for transformation; for organizations to adapt and innovate in order to emerge stronger,” said Rajiv Sodhi, COO, Microsoft India. “We commissioned this research to gain better understanding of the relationship between having a culture of innovation and an organization’s growth. But now, more than achieving growth, we see that having a mature culture of innovation translates to resilience, and strength to withstand economic crises to recover,” he added.


Cyber Resilience During Times of Uncertainty

Current cybersecurity strategies tend to center around stopping potential threats from getting into your computing and communications infrastructure at all. To be successful, this requires that no employee ever click on a bad link, download the wrong file or work from an unsecured Wi-Fi network. However, this approach is neither realistic nor sufficient in today’s world, and it will be impossible in our collective future. That is why business leaders need to rethink their cyber strategy to adapt to our constantly changing world. In practice, the concept of cyber resilience is based on a bend-but-not-break philosophy. It understands that despite significant defensive investments and best efforts, cyber-criminals will occasionally get in. The cyber resilience approach is based on the premise that if you organize your defenses to prioritize resiliency over just computer security, you keep what’s most important going – your business. No matter what your business might be – whether it is churning out widgets or keeping the lights on – what’s key is to keep your most valuable assets unaffected and operational. Implementing this new goal, from the boardroom down, helps save money and improve results.


Governance Through Code for Microservices at Scale

Don’t go too far and take all decision making away from the development squads. It’s natural to think that the more we implement and the fewer the choices developers have to make, the better. However, I’ve found that there is such a thing as going too far. On one of my projects the architecture team was making all of the decisions and creating frameworks/tools to govern through code. I still recall vividly one of the developers coming to me and saying “If you architects want to make all of the decisions and tell us how to implement things, then put your cell phone number in Pager Duty! If you want me to be accountable and be woken up at 3 AM when my code breaks in production, then I am going to make the decisions.” A decentralized governance approach was necessary and the role of the architect needed to be that of a boundary setter. While the Product Build Squad is designing and building the Microservices Framework and the Developer Onboarding Tool (using an Inner Source approach with contributions from other developers), development squads are already using the framework and tool. Depending on how many development squads your project/program has, you could have many Microservices with, say, Version 1.0 of the Microservices Framework.



Quote for the day:

"Success is often the result of taking a misstep in the right direction." - Al Bernstein

Daily Tech Digest - April 30, 2020

Why the Public Versus Private Blockchain Debate Is the Wrong Conversation

Public versus private blockchain
The conversation regarding public versus private blockchain doesn’t have to be a polarizing one. It’s not an either/or debate but rather a question of application. Private blockchains don’t have to be viewed as the enemy, or a replacement for public ones. They are simply a case-specific option. When taken out of the theoretical arena, there is room for both open read-and-write blockchains and those with access restrictions. What we find in practice, having developed numerous blockchain applications for both entrepreneurs and intrapreneurs, is that the apparently different requirements of each tend to converge over time. That is, many applications built by entrepreneurs will integrate with one or more large corporate enterprises at some point, and will therefore need to address their needs. Similarly, many enterprise applications are tackling obstacles that currently prevent them from making their solutions more open and capable of incorporating tokens of some form. Both sides are invested in the value of bringing integrity around data. 



It's because of the sudden change in working that 47% of those surveyed say they've found themselves reassigned to general IT tasks as organisations adapt to the new reality. In 90% of cases, the security team is working remotely full-time – the remaining 10% that are still going to an office are doing so either because their organisation is sensitive in nature and the work can't be done from home, or the company doesn't have the capability to allow full-time remote work. In many cases, these people would prefer to stay home, but as some respondents put it, "duty calls". In a significant number of cases, those duties involve dealing with a rise in the number of cyberattacks and other security incidents: overall 23% said the number of these had gone up since the transition to remote work and in some cases security teams are tracking double the number of incidents. Worryingly, 30% of those security professionals who've been reassigned to IT say there's been a rise in security incidents against their organisation, compared to 17% who haven't changed roles but say they're dealing with more attacks.


Shade Ransomware Operation Apparently Shuts Down

Jornt van der Wiel, a security researcher at Kaspersky, notes that even though the decryption keys are real, the true motive behind why the Shade operators decided to end their operations may never be known. "Keys can be stolen by a rival gang who put the message on Github, or it can be the real authors," van der Wiel tells Information Security Media Group. "We will never know until law enforcement agencies do some arrests." Those who say they are the operators of Shade, which is also known as Troldesh or Encoder.858, say in their GitHub post that they shut down their operations at the end of 2019 and that they were publishing their decryption keys, which can help security companies create their own tools to help remove the malware and recover any other crypto-locked files. "We are also publishing our decryption; we also hope that, having the keys, anti-virus companies will issue their own more user-friendly decryption tools. All other data related to our activity was irrevocably destroyed," according to the GitHub post. "We apologize to all the victims of the Trojan and hope that the keys we published will help them to recover their data."


Designing software to include older people in the digital world


“If you design for older people, you’re making inclusive choices for design and accessibility for everyone,” says Froso Ellina, product design manager at software development consultant VMware Pivotal Labs. On text, Ellina says that as well as using high colour contrasts and larger sizes, the choice of typography is important. A small number of simple fonts – with sans-serif ones such as Arial often the more accessible choice – can increase readability. Subtitling online videos means they can be used by those with poor hearing or no ability to hear, but also makes these work for those who are in a location where they can’t use audio. Older people can also find it harder to use touch screens due to declining motor skills. Ellina says that one centimetre is a good minimum length for a target area such as a button or link, and it makes sense to leave plenty of space between them. Short-term memory tends to decline with age, which has implications for how software is updated.
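Ellina's one-centimetre guideline has to be translated into pixels per device. A quick sketch of that conversion (the DPI values below are common screen-density baselines, used purely as examples):

```python
def cm_to_px(cm, dpi):
    """Convert a physical size to pixels at a given screen density
    (2.54 cm per inch)."""
    return cm / 2.54 * dpi

# Roughly how large a 1 cm touch target is on common screen densities:
for dpi in (160, 320, 480):        # e.g. Android mdpi / xhdpi / xxhdpi
    print(dpi, round(cm_to_px(1.0, dpi)))
```

The point of working in physical units is that a target sized in raw pixels shrinks on denser screens, which disproportionately penalises users with declining motor skills.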


AI cannot be recognised as an inventor, US rules

The US Patent Office says that only humans are able to be inventors under the law.
The US Patent and Trademark Office rejected two patents where the AI system Dabus was listed as the inventor, in a ruling on Monday. US patent law had previously only specified that eligible inventors had to be "individuals". ... Dabus designed: interlocking food containers that are easy for robots to grasp; and a warning light that flashes in a hard-to-ignore rhythm. And its creator, physicist and AI researcher Stephen Thaler, had argued that because he had not helped it with the inventions, it would be inaccurate to list himself as the inventor. But patent offices insist innovations are attributed to humans - to avoid legal complications that would arise if corporate inventorship were recognised. Some academics, however, have previously suggested this should no longer apply. The European Patent Office has seen a surge in AI-driven filings, according to Powell Gilbert LLP intellectual property law specialist Penny Gilbert. "AI is a fast-evolving field, set to revolutionise many industries, and raises many untested issues around patentability and ownership of inventions that are made using it," she told BBC News.


Reinforcement Machine Learning for Effective Clinical Trials


Machine Learning (ML) is often thought to be either Supervised (learning from labeled data) or Unsupervised (finding patterns in raw data). A less talked about area of ML is Reinforcement Learning (RL) – where we train an agent to learn by “observing” an environment rather than from a static dataset. RL is considered to be closer to a true form of Artificial Intelligence (AI) – because it’s analogous to how we, as humans, learn new things – observing and learning by trial and error. ... A simpler abstraction of the RL problem is the multi-armed bandit problem. A multi-armed bandit problem does not account for the environment and its state changes. As shown in figure 2 below, here the agent only observes the actions it takes and the rewards it receives and tries to devise the optimal strategy. The idea in solving multi-armed bandit problems is to explore the action space and understand the distribution of the unknown reward function.
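One standard strategy for that exploration is epsilon-greedy: pull the best-known arm most of the time, but try a random arm with small probability so every arm's reward distribution gets estimated. A self-contained sketch with hypothetical Bernoulli arms (not tied to any particular clinical-trial setup):

```python
import random

def epsilon_greedy(true_means, steps=10000, eps=0.1, seed=0):
    """Estimate each arm's mean reward while mostly pulling the best-known arm."""
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    estimates = [0.0] * len(true_means)
    for _ in range(steps):
        if rng.random() < eps:                        # explore a random arm
            arm = rng.randrange(len(true_means))
        else:                                         # exploit the current best
            arm = max(range(len(true_means)), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0  # Bernoulli arm
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
    return estimates

est = epsilon_greedy([0.3, 0.5, 0.7])   # three arms with hidden success rates
```

After enough pulls the estimates concentrate around the hidden means, and the highest-valued arm (here the third) dominates the exploitation steps. In a trial setting, that trade-off is what lets the agent shift allocation toward the better-performing treatment while still gathering evidence on the others.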


Get to know edge storage and the technology around it

Edge computing: Data is rarely static and often moves from where users are collecting and using it to the cloud or to a central data center for analysis, processing and storage. But data centers and clouds are often far from where the data is collected. Transmission takes time and inserts latency and inefficiencies into the processing equation. That's time that most organizations using IoT functionality just don't have. For instance, an autonomous vehicle can't wait for an answer on whether to swerve right or left; it needs a real-time response. Edge computing closes that data transmission distance and puts compute and storage closer to where the data is collected. This approach essentially decentralizes the traditional data center. Fog computing: Fog computing refers to a decentralized computing infrastructure in which data, applications, compute and storage sit between where the data originates and the cloud. Fog computing brings the cloud's intelligence, processing, compute and storage capabilities closer to the data for faster analysis and processing. Like edge computing, fog eliminates inefficiencies that come with data transmission and solves privacy and security issues inherent in data transmission.
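The latency argument can be made concrete with back-of-the-envelope numbers (every figure below is invented for illustration, not a measured benchmark):

```python
# Toy latency comparison: round-trip to a distant cloud vs. a nearby
# edge node, checked against a real-time control budget.
BUDGET_MS = 20                                   # e.g. a vehicle's reaction deadline

def total_latency(network_rtt_ms, compute_ms):
    return network_rtt_ms + compute_ms

cloud = total_latency(network_rtt_ms=80, compute_ms=5)   # far data center
edge = total_latency(network_rtt_ms=2, compute_ms=8)     # nearby edge node

print(cloud <= BUDGET_MS, edge <= BUDGET_MS)  # prints: False True
```

Even with faster compute in the data center, the transmission distance alone blows the budget; moving the decision to the edge is what makes the deadline reachable.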


Data governance matters now more than ever

Records Management is built into the Microsoft 365 productivity stack and existing customer workflows, easing the friction that often occurs between enforcing governance controls and user productivity. For example, say your team is working on a contract. Thanks to built-in retention policies embedded in the tools people use every day, they can continue to be productive while collaborating on a contract that has been declared a record—such as sharing, coauthoring, and accessing the record through mobile devices. We have also integrated our disposition process natively into the tools you use every day, including SharePoint and Outlook. Records versioning also makes collaboration on record-declared documents better, so you can track when edits are made to the contract. It allows users to unlock a document with a record label to make edits to it with all records safely retained and audit trails maintained. With Records Management, you can balance rigorous enforcement of data controls with allowing your organization to be fully productive.



Among the reasons senior executives in Australia are adopting AI: 41% believe it frees up more time for employees to focus on more important tasks, another 40% see AI as a way to improve customer experience and service, and 39% agree AI offers businesses the ability to leverage data and analytics. Genpact Australia vice president and country manager Richard Morgan said the adoption of AI by Australian businesses signals that executives understand the potential benefits it could deliver. "I think AI is now a way to try to mine information and drive better outcomes for the company themselves, and to give clients a better experience to get them coming back and using your products and services more frequently -- that's the holy grail," he told ZDNet. Australian executives also believe that integrating AI into the talent process could help reduce gender bias in recruitment, hiring, and promotion, the study showed. On the other end of the spectrum, three-quarters of Australians said they are concerned about AI bias and another 67% fear that AI will make decisions that affect them without their knowledge.


Arming yourself against deepfake technology

Deepfakes are likely to continue causing havoc for politicians in the coming years, but equally, modern enterprises could also find themselves under threat. In 2019, the UK boss of an energy company was tricked over the phone when he was asked to transfer £200,000 to a Hungarian bank account by an individual using deepfake audio technology. The individual believed the call to be from his boss, but actually, the voice had been impersonated by a fraudster who succeeded in defrauding the man out of money. Occasions like this, particularly where there are substantial amounts of capital at risk, are reminders that organisations should be on high alert for deceptive fraudsters and arm themselves accordingly.  In sectors such as financial services, vast amounts of customer data are at risk and a breach of information or assets can have detrimental effects on all involved. When data is breached, both the consumer and organisation face potentially large consequences.



Quote for the day:


"When you find an idea that you just can't stop thinking about, that's probably a good one to pursue." -- Josh James


Daily Tech Digest - March 29, 2020

Microsoft Patents New Cryptocurrency System Using Body Activity Data
Microsoft Technology Licensing, the licensing arm of Microsoft Corp., has been granted an international patent for a “cryptocurrency system using body activity data.” The patent was published by the World Intellectual Property Organization (WIPO) on March 26. The application was filed on June 20 last year. “Human body activity associated with a task provided to a user may be used in a mining process of a cryptocurrency system,” the patent reads, adding as an example: A brain wave or body heat emitted from the user when the user performs the task provided by an information or service provider, such as viewing advertisement or using certain internet services, can be used in the mining process. ... Different types of sensors can be used to “measure or sense body activity or scan human body,” the patent explains. They include “functional magnetic resonance imaging (fMRI) scanners or sensors, electroencephalography (EEG) sensors, near infrared spectroscopy (NIRS) sensors, heart rate monitors, thermal sensors, optical sensors, radio frequency (RF) sensors, ultrasonic sensors, cameras, or any other sensor or scanner” that will do the same job.


Is Samsung Quietly Becoming a Significant Player in the Cryptocurrency and Blockchain Industry?


It is thought that Samsung has created a processor dedicated to protecting the user’s PIN, pattern, password, and blockchain private key, in combination with its Knox security platform. This helps keep its new S20 range secure. Samsung introduced its Blockchain Keystore last year; it initially supported only ERC-20 tokens but added bitcoin in August of that year. Using Samsung devices with Blockchain Keystore means users can store their bitcoin and crypto wallet private keys on the device. One of the most critical and most overlooked issues is control over a wallet's private key; it is the reason most crypto thefts and hacks happen, because users fail to store their tokens in wallets they hold the private keys for. Storing bitcoin or crypto in smartphone wallets therefore gives users control over their private keys and removes the reliance on external companies. The adoption of crypto has fallen short of expectations in recent years. However, user experience developments have helped innovate the technology to make using crypto more accessible.



Network of fake QR code generators will steal your Bitcoin

A network of Bitcoin-to-QR-code generators has stolen more than $45,000 from users in the past four weeks, ZDNet has learned. The nine websites provided users with the ability to enter their Bitcoin address, a long string of text where Bitcoin funds are stored, and convert it into a QR code image they could save on their PC or smartphone. Today, it's a common practice to share a Bitcoin address as a QR code and request a payment from another person. The receiver scans the QR code with a Bitcoin wallet app and sends the requested payment without having to type a lengthy Bitcoin address by hand. By using QR codes, users eliminate the possibility of a mistype that might send funds to the wrong wallet. Last week, Harry Denley, Director of Security at the MyCrypto platform, ran across a suspicious site that converted Bitcoin addresses into QR codes. While many services like this exist, Denley realized that the website was malicious in nature. Instead of converting an inputted Bitcoin (BTC) address into its QR code equivalent, the website always generated the same QR code -- for a scammer's wallet.
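One client-side defense the incident suggests: before paying, independently decode the QR payload, confirm it matches the address that was entered, and check that the address is well-formed. The sketch below uses only the standard library and handles only legacy Base58Check ("1...") addresses, not bech32 SegWit ones; the payload format is the common `bitcoin:` URI scheme.

```python
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def is_valid_legacy_address(addr):
    """Base58Check-validate a legacy (25-byte) Bitcoin address."""
    n = 0
    for ch in addr:
        if ch not in B58:
            return False
        n = n * 58 + B58.index(ch)
    try:
        raw = n.to_bytes(25, "big")   # version byte + hash160 + 4-byte checksum
    except OverflowError:
        return False
    checksum = hashlib.sha256(hashlib.sha256(raw[:-4]).digest()).digest()[:4]
    return checksum == raw[-4:]

# Before paying, confirm the QR payload encodes the address you entered.
entered = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"   # the genesis-block address
qr_payload = "bitcoin:" + entered
assert qr_payload.split(":", 1)[1] == entered and is_valid_legacy_address(entered)
```

The checksum catches mistypes and corrupted QR images; only the explicit string comparison catches a deliberately swapped-in scammer address, which is why both checks matter.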


The 5G Economic Impact

Despite its nascent status, the 5G ecosystem is already swimming in financial might. That same GSMA report predicts 5G technology will add $2.2 trillion to the global economy over the next 15 years, and operators are expected to spend more than $1 trillion on mobile capex between 2020 and 2025, with 80% of that spend directed at their 5G networks. While past technology evolutions primarily targeted the consumer market, the spend and return on 5G have a larger focus on the broader enterprise space. This includes connecting not just traditional enterprise workers and their mobile devices but all electronic devices, which will involve a broader push toward edge deployments that can serve what are expected to be billions of connected IoT devices. “With greater reliability and data speeds that will surpass those of 4G networks, a combination of 5G and local edge compute will pave the way for new business value,” ABI Research noted in a recent report, citing benefits from agility and process optimization, better and more efficient quality assurance, and productivity improvement.


Adopting robotic process automation in Internal Audit


With automation technologies advancing quickly and early adopters demonstrating their effectiveness, now is the time to understand and prioritize opportunities for Internal Audit robotic process automation, and to take important steps to prepare for thoughtful, progressive deployment. The age of automation is here, and with it come opportunities for integrating Internal Audit (IA) robotic process automation (RPA) into the third line of defense (i.e., Internal Audit). IA departments, large and small, have already begun their journey into the world of automation by expanding their use of traditional analytics to include predictive models, RPA, and cognitive intelligence (CI). This is leading to quality enhancements, risk reductions, and time savings, not to mention increased risk intelligence. The automation spectrum, as we define it, comprises a broad range of digital technologies. At one end are predictive models and tools for data integration and visualization; at the other are advanced technologies with cognitive elements that mimic human behavior. Many IA organizations are familiar with the first part of the automation spectrum, having already established foundational data integration and analytics programs to enhance risk assessment, audit fieldwork, and reporting.


A debate between AI experts shows a battle over the technology’s future

Why add classical AI to the mix? Well, we do all kinds of reasoning based on our knowledge in the world. Deep learning just doesn’t represent that. There’s no way in these systems to represent what a ball is or what a bottle is and what these things do to one another. So the results look great, but they’re typically not very generalizable. Classical AI—that’s its wheelhouse. It can, for example, parse a sentence to its semantic representation, or have knowledge about what’s going on in the world and then make inferences about that. It has its own problems: it usually doesn’t have enough coverage, because too much of it is hand-written and so forth. But at least in principle, it’s the only way we know to make systems that can do things like logical inference and inductive inference over abstract knowledge. It still doesn’t mean it’s absolutely right, but it’s by far the best that we have. And then there’s a lot of psychological evidence that people can do some level of symbolic representation.
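As a toy illustration of the kind of symbolic inference described above, here is a minimal forward-chaining sketch over (entity, property) facts; the facts and rules are our own made-up examples:

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form "if x has P, then x has Q"
    until no new facts can be derived (a fixpoint)."""
    derived = set(facts)
    while True:
        new = {(x, rules[p]) for (x, p) in derived if p in rules} - derived
        if not new:
            return derived
        derived |= new

facts = {("ball", "is_round"), ("bottle", "is_container")}
rules = {
    "is_round": "can_roll",            # round things can roll
    "can_roll": "moves_when_pushed",   # rolling things move when pushed
    "is_container": "can_hold_things", # containers can hold things
}

knowledge = forward_chain(facts, rules)
```

Even this trivial engine chains inferences (the ball rolls, therefore it moves when pushed) over explicit knowledge of what a ball or a bottle is, which is exactly what a pure deep-learning system has no native representation for.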


Apache Flink in 10 Minutes


Apache Flink is an open-source stream processing framework, widely used by companies such as Uber, ResearchGate, and Zalando. At its core, it is all about processing stream data coming from external sources, and it can work with state-of-the-art messaging frameworks like Apache Kafka, Apache NiFi, Amazon Kinesis Streams, and RabbitMQ. Let's explore a simple Scala example of stream processing with Apache Flink. We'll ingest sensor data from Apache Kafka in JSON format, parse it, filter it, calculate the distance the sensor has traveled over the last 5 seconds, and send the processed data back to Kafka on a different topic. To get data into Kafka, we'll create a simple Python-based producer; the code is in the appendix. ... Now we need a way to parse the JSON strings. As Scala has no built-in functionality for that, we'll use the Play Framework. First, we need a case class to parse our JSON strings into; for simplicity, we will use automatic conversion from JSON strings to the JsonMessage. To transform elements in the stream we use the .map transformation, which takes a single element as input and produces a single output. We'll also have to filter out the elements that failed to parse.
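Since the article's Scala code isn't reproduced here, the core of the pipeline (parse JSON, drop records that fail to parse, compute distance over the last 5 seconds) can be sketched in plain Python; the field names x, y, and ts are our own assumptions about the sensor payload:

```python
import json
import math

def parse(msg):
    """Parse one JSON message into (x, y, ts), or None on failure,
    mirroring Flink's map-then-filter handling of unparseable records."""
    try:
        d = json.loads(msg)
        return (float(d["x"]), float(d["y"]), float(d["ts"]))
    except (ValueError, KeyError, TypeError):
        return None

def window_distance(readings, window=5.0):
    """Distance traveled over the last `window` seconds,
    given (x, y, ts) readings sorted by timestamp."""
    if not readings:
        return 0.0
    cutoff = readings[-1][2] - window
    recent = [r for r in readings if r[2] >= cutoff]
    return sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(recent, recent[1:]))

messages = ['{"x": 0, "y": 0, "ts": 0}', '{"x": 3, "y": 4, "ts": 2}', 'garbage']
readings = [p for p in map(parse, messages) if p is not None]
```

In real Flink, the parse step would be a `.map`, the None-check a `.filter`, and the 5-second distance a windowed aggregation keyed by sensor; this sketch only shows the per-record logic.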


Google Invents AI That Learns a Key Part of Chip Design

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work that posted today to Arxiv. “We have already seen that there are algorithms or neural network architectures that… don't perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.” Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks' worth of design effort by human experts in terms of power, performance, and area. Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that power and performance are maximized and the area of the chip is minimized.
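A common proxy for placement quality is half-perimeter wirelength (HPWL): for each net (a set of connected blocks), the half-perimeter of the bounding box around its blocks. This toy sketch is our own illustration, not Google's actual reward function (which also weighs factors like congestion and density), but it shows why placement is an optimization problem:

```python
def hpwl(placement, nets):
    """Half-perimeter wirelength: for each net, width + height of the
    bounding box around its blocks' positions, summed over all nets."""
    total = 0.0
    for net in nets:
        xs = [placement[b][0] for b in net]
        ys = [placement[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two candidate placements of three connected blocks (made-up coordinates).
nets = [["A", "B"], ["B", "C"]]
spread_out = {"A": (0, 0), "B": (8, 8), "C": (0, 8)}
compact    = {"A": (0, 0), "B": (1, 1), "C": (0, 1)}
```

A placer, whether learned or classical, searches for coordinates that minimize a cost like this, subject to blocks not overlapping; the compact placement above scores far better than the spread-out one.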


This Simple WhatsApp Hack Will Hijack Your Account: Here’s What You Must Do Now

The most obvious advice is NEVER to send a six-digit SMS code to anyone for any reason; other attacks on other platforms have used the same method. A code sent to your phone relates to your phone. But there is a fix here that will protect your WhatsApp even if the SMS code is sent onward, ensuring you can't fall victim to this crime. The code sent by SMS when you set up your WhatsApp account on a new phone comes directly from WhatsApp itself: the platform sets the code and sends it to you. But there is a totally separate setting in your own WhatsApp application that allows you to set your own six-digit PIN. There is some confusion because both are six-digit numbers, yet they are entirely separate. Most people have still not set up this PIN: the “Two-Step Verification” setting can be accessed under Settings > Account from within the app, and it takes less than a minute to set up. The PIN is yours to select, and there is even the option of a backup email address. WhatsApp will ask you for the PIN when you change phones, and also every so often while you're using the app; that's how secure it is.


How To Create Values & Ethics To AI In The Workplace?

The widespread uptake of this technology comes at a time when more and more businesses are proactively addressing diversity and inclusivity among their workforce. Reports suggest that the US needs a curious, ethical AI workforce that works collaboratively to build reliable AI systems. Members of AI development teams need to engage in deep discussions about the implications of their work on the warfighters using it. In order to build AI systems effectively and ethically, defense organizations must foster an ethical, inclusive work environment and recruit a diverse workforce. This workforce should include curiosity experts, professionals who focus on human needs and behaviors, who are more likely to envision unsolicited and unintended consequences associated with a system's use and mismanagement, and to ask tough questions about those consequences. According to a research report, cognitively diverse teams solve problems faster than teams of cognitively similar people. This also paves the way for innovation and creativity to flow, minimizing the risk of homogenous ideas coming to the fore.



Quote for the day:


"A leader is not an administrator who loves to run others, but someone who carries water for his people so that they can get on with their jobs." -- Robert Townsend