Daily Tech Digest - September 06, 2018

IBM researchers build AI-powered prototype to help small farmers test soil

The AgroPad is a paper device about the size of a business card. It has a microfluidics chip inside that can perform a chemical analysis of a water or soil sample in less than 10 seconds. A farmer simply puts his sample on one side of the card, and on the other side, a set of circles provides colorimetric test results. Using a dedicated smartphone app, the farmer can receive immediate, precise results. The app uses machine vision to translate the color composition and intensity into chemical concentrations, with results more reliable than those that rely on human vision. The current prototype measures pH, nitrogen dioxide, aluminum, magnesium and chlorine, though the research team is working on extending the library of chemical indicators. AgroPads could be personalized based on the needs of the individual farmer. Once the test results are in, the data can be streamed to the cloud and labeled with a digital tag to mark the time and location of the analysis. Results for millions of individual tests could be stored.
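As a rough illustration of the colorimetric step described above, here is a minimal Python sketch that maps a test spot's mean color intensity to a chemical concentration via a calibration curve. The curve, the choice of color channel, and all numbers are invented for illustration; IBM has not published the AgroPad's actual model.

```python
# Hypothetical sketch: map the mean color intensity of one test circle
# to a concentration reading via piecewise-linear interpolation over an
# invented calibration curve.

def interpolate(x, points):
    """Piecewise-linear interpolation over sorted (x, y) calibration points."""
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Invented calibration: green-channel intensity (0-255) -> pH
PH_CURVE = [(40, 4.0), (90, 5.5), (150, 7.0), (210, 8.5)]

def spot_to_ph(pixels):
    """pixels: iterable of (r, g, b) tuples sampled from one test circle."""
    mean_green = sum(g for _, g, _ in pixels) / len(pixels)
    return round(interpolate(mean_green, PH_CURVE), 2)

sample = [(120, 150, 90), (118, 152, 88), (121, 149, 91)]
print(spot_to_ph(sample))  # -> 7.01, a mid-curve reading
```

A real pipeline would also correct for lighting and camera differences before interpolating, which is presumably where the machine-vision work lies.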



Microchip 'god mode' flaw: Is it time to rethink security?


This particular vulnerability might be described as a type of hardware backdoor, in which undocumented CPU instructions can take a process from an operating system's Ring 3, the least privileged level of access to resources, directly to Ring 0, the most privileged level of access to resources. Ring 3 is where applications run, and keeping them there keeps them from tinkering with the data or code that other applications use. Ring 0 is reserved for the operating system itself, which manages the resources that all running processes can access. An application needs to be in Ring 0 to enable this backdoor, but Domas found that some systems seem to have been shipped with the backdoor already enabled. Software running in Ring 0 can potentially bypass any security mechanism of other processes. If a process uses a password or cryptographic key, another process running in Ring 0 may also be able to get that password or key, thus virtually eliminating the security it provides.
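To make the ring model concrete, here is a toy Python sketch (not how real CPUs implement protection) showing why privilege checks become meaningless once a Ring 3 process can act as Ring 0: every "is the caller privileged enough?" test passes.

```python
# Toy model of x86 protection rings: an operation succeeds only if the
# caller's ring number is low (privileged) enough for the resource.
# The backdoor described above effectively lets a Ring 3 process present
# itself as Ring 0, so checks like this one no longer protect anything.
RING0, RING3 = 0, 3

secrets = {"other_process_key": "s3cr3t"}   # data that should be Ring 0 only

def read_secret(caller_ring):
    if caller_ring > RING0:
        raise PermissionError("Ring %d may not touch Ring 0 data" % caller_ring)
    return secrets["other_process_key"]

try:
    read_secret(RING3)                      # normal application: denied
except PermissionError as e:
    print(e)

print(read_secret(RING0))                   # "backdoored" caller: allowed
```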


How blockchain technology could aid key data challenges


The current model stores data in a single place. A Microsoft Word document, for example, is saved to a desktop; even if it can be accessed through a server, or remotely, it still lives in one centralized location. Blockchain is, arguably, the exact opposite. Data is stored in "blocks," and the blockchain in its entirety is an encrypted ledger that is replicated across every node in the network. The data is decentralized through this replication: rather than being saved in a single place, a copy of the ledger exists on every node. Even though the ledger exists in a public space, a private key is required to access a specific block, which lets data be distributed but not copied. This manner of storage protects information from ransomware and hacking attacks: to do any damage, an attacker would have to breach and alter every replica simultaneously, as opposed to corrupting or stealing a single document under the current centralized model.
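The tamper-evidence argument can be sketched with a minimal hash chain in Python. This illustrates only the linking-and-replication idea, not any production blockchain: each block stores the hash of its predecessor, so altering one block invalidates everything after it.

```python
# Minimal hash chain: each block commits to the previous block's hash,
# so changing any block's data breaks validation of the whole chain.
import hashlib
import json

def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_valid(chain):
    for prev, cur in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(
            json.dumps({"data": cur["data"], "prev": cur["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0")]
for d in ("tx-1", "tx-2"):
    chain.append(make_block(d, chain[-1]["hash"]))

print(chain_valid(chain))       # True: untouched chain verifies
chain[1]["data"] = "tampered"
print(chain_valid(chain))       # False: one altered block breaks the chain
```

In a real network the same check runs independently on every node, which is why an attacker would have to subvert a majority of replicas, not just one file.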


Is a developer career right for you? 10 questions to ask yourself

Developers are among the most in-demand tech professionals in the workforce, with high salaries offered to those with the right skill sets. While learning to code and breaking into a new career may seem daunting, the high number of open jobs and training opportunities could make development a great option for many people. "A lot of developers suffer with imposter syndrome and feeling like they don't have enough knowledge or experience to apply for a developer position," said Cristina Blanchard, a front end web developer at Brew Agency. "The truth is, if you have a solid knowledge and understanding of the most basic, core concepts of development, you can learn just about anything with the right training and a little tenacity. Don't be afraid to apply for a position you feel you may be under-qualified for, because you never know who may be willing to train you or help you get the experience you need."


Government projects watchdog recommends terminating Gov.uk Verify identity project


Sources suggest that GDS hopes to make a case to continue with Verify. Just this week, it announced three further digital services using Verify had reached the “private beta” testing stage, although none of the services have a launch date. ... GDS is also understood to be making a case that Verify remains essential to the ongoing roll-out of Universal Credit, the government’s new benefits system. But even there, the Department for Work and Pensions has had to develop an additional identity system after finding that hundreds of thousands of benefits applicants could be unable to register successfully on Verify. ... There are also question marks over the commitment of the IDPs – also known as certified companies. A report by McKinsey for the Cabinet Office showed that more than 80% of users chose two of the seven IDPs – Experian and the Post Office – leaving Digidentity, Royal Mail, Barclays, Citizen Safe and Secure Identity to pick up the remainder between them.


Designing a Usable, Flexible, Long-Lasting API

Most APIs aren't truly REST APIs, so if you choose to build a RESTful API, do you understand the constraints of REST including hypermedia/HATEOAS? If you choose to build a partial REST or REST-like API, do you know why you are choosing to not follow certain constraints of REST? Depending on what your API needs to be able to do and where your API will be used, legacy formats such as SOAP may make sense. However, with each format comes a tradeoff in terms of usability, flexibility, and development costs. Finally, as we start to plan our API, it's important to understand how our users will interact with the API and how they'll use it in conjunction with other services. Be sure to use tools like RAML or Swagger/OAI during this process to involve your users, provide mock APIs for them to interact with, and to ensure your design is consistent and meets their needs. As you design your API, it's also important to remember that you're laying a foundation to build upon at a later time.
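To make the hypermedia/HATEOAS constraint mentioned above concrete, here is a small Python sketch of a response body that carries its own links, so the client discovers what it can do next instead of hard-coding URLs. The resource shape and link relations are invented for illustration.

```python
# Sketch of a hypermedia (HATEOAS) style representation: the available
# actions are advertised as links and depend on the resource's state.
import json

def order_representation(order_id, status):
    links = [{"rel": "self", "href": f"/orders/{order_id}"}]
    if status == "open":
        # An open order can still be cancelled or paid for.
        links.append({"rel": "cancel", "href": f"/orders/{order_id}/cancel"})
        links.append({"rel": "payment", "href": f"/orders/{order_id}/payment"})
    return {"id": order_id, "status": status, "links": links}

print(json.dumps(order_representation(42, "open"), indent=2))
```

A client that follows only the advertised `rel` values stays correct even when the server changes its URL layout, which is the flexibility this constraint buys.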


How to Cultivate Security Champions at the Workplace

Things you consider simple can make a big impact on people. Think even smaller: visit with people one-on-one as time and events present themselves. One last note: there is no better time than an incident debrief to educate users, one-on-one or in a group. The point is to get people's attention. Show them why security is important. Show how easy it really can be for malicious actors to wreak havoc in your environment. Show how they can have a direct impact in helping to prevent that. A few people will take it to heart and develop a security mindset. Many people in information security are problem solvers. Approach it that way. Demonstrate how a malicious actor could easily attack your AD/Kerberos infrastructure, then see how many ask what can be done to mitigate it. Instead of answering, ask them what they would do and what they can think of. Make it a problem for them to solve. Just keep your audience in mind: what will entice one audience, say, demonstrating the intricacies of Kerberoasting to your server administrators, will be lost on business partners.


Cloud computing: Three reasons why it could be time to go cloud-first

"There's data governance questions and there might be clients that don't allow us to pass their data to the cloud -- and that's why there might be a situation why we can't go on demand. But our default option will always be to take the cloud option first if we can. And that approach will be multi-cloud where we'll use a range of providers." Kay says he doesn't believe the cloud is a one-size-fits-all situation right now. He says there's still a bit of an arms race taking place and that different providers have different strengths. "Some are better at doing things better than others. And we, therefore, want to be able to take advantage of those capabilities," says Kay. "So, we will not be dogmatic and push everything to a single provider. We're trying all of them -- at the SaaS level, we're using Salesforce, Microsoft Dynamics 365 and Workday. When it comes to IaaS and PaaS, we're using AWS, Azure and Google. That's a deliberate strategy. We have a view on which provider is stronger for a particular set of characteristics."


Four Ways to Take Charge in Your First Agile Project


Creating an environment of psychological safety is imperative for a high-performing team. Google conducted a two-year-long study on team collaboration and found that when individuals felt they could share their honest opinions without fear of backlash, they performed far better. When employees feel that their opinions matter, their engagement levels peak and, according to Gallup’s research, their productivity increases by an average of 12%. But unfortunately, this doesn’t always happen, especially when teams are new to Agile and Scrum. The Agile Report found that among the top challenges reported while adopting an Agile approach were its alignment to cultural philosophy, lack of leadership support, and troubles with collaboration. All of these issues are related to people’s personalities, including their strong points and areas of weakness. If a strong leader is put in place and a strategy is designed to work to each member’s strengths, many of these problems can diminish.


Ransomware Recovery: Don't Make Matters Worse

"Trying to decide whether to pay ransom or not is never an easy decision - the best answer is 'no, never' but that's not always a decision you can make," says former healthcare CIO David Finn, executive vice president of the security consultancy CynergisTek. "You will rarely negotiate from a position of strength with a hacker - or any criminal, for that matter. Having a well thought out plan would have helped, and certainly being able to restore the data yourself, without 'buying' decryption might have avoided the entire nasty event." An organization that chooses to pay attackers to unlock data "should apply the decryption key itself with whatever instruction the criminals can provide - instead of sending a file to the ransomware perpetrators to decrypt as evidence the key works," suggests Keith Fricke, principal consultant at tw-Security. "If possible, it is a good practice to have a third-party vendor pay for the decryption key on behalf of the [organization]," he adds.



Quote for the day:


"Give whatever you are doing and whoever you are with the gift of your attention." -- Jim Rohn


Daily Tech Digest - September 05, 2018

Binary randomization makes large-scale vulnerability exploitation nearly impossible
To date, critical infrastructure cybersecurity has relied too much upon network monitoring and anomaly detection in an attempt to detect suspicious traffic before it turns problematic. The challenge with this approach is that it is reactionary and only effective after an adversary has breached some level of defenses. We take an entirely different approach, focusing on prevention by denying malware the uniformity it needs to propagate. To do this, we use a binary randomization technique that shuffles the basic constructs of a program, known as basic blocks, to produce code that is functionally identical, but logically unique. When attackers develop an exploit for a known vulnerability in a program, it helps to know where all of the code is located so that they can repurpose it to do their bidding. Binary randomization renders that prior knowledge useless, as each instance of a program has code in different locations. One way to visualize the concept of binary randomization is to picture the Star Wars universe at the time when Luke Skywalker and the Rebel Alliance set off to destroy the Death Star.
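The basic-block shuffling idea can be mimicked with a toy Python sketch. Real binary randomization rewrites machine code and fixes up jump targets; this model only captures the concept that every instance gets a different physical layout while behaving identically.

```python
# Toy model of binary randomization: the "program" is a set of basic
# blocks. Each seed yields a different physical layout, but execution
# follows logical control flow through an address table, so behavior
# is unchanged while code locations differ per instance.
import random

blocks = {"entry": "A", "loop": "B", "check": "C", "exit": "D"}
flow = ["entry", "loop", "check", "exit"]   # logical control flow

def lay_out(seed):
    """Place blocks in a shuffled physical order; return the layout and
    an address table so 'jumps' can still find every block."""
    layout = sorted(blocks)
    random.Random(seed).shuffle(layout)
    return layout, {name: idx for idx, name in enumerate(layout)}

def run(layout, table):
    """Execute by walking logical flow through the address table."""
    return "".join(blocks[layout[table[name]]] for name in flow)

for seed in (1, 2, 3):
    layout, table = lay_out(seed)
    print(layout, "->", run(layout, table))  # layouts vary, output is always "ABCD"
```

An exploit that hard-codes "the gadget I need lives at position 2" works against one layout and fails against the next, which is the defensive property described above.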


How to eliminate project noise

Once the requirements and project scope are clearly set, it's important to operate within those boundaries and not fold in additional enhancements that will affect timelines and deliverables. Enhancement requests usually come from end users. If the request is easy to satisfy, like altering a screen layout or adding a data field edit, it likely can be added without much impact. But if the enhancement impacts multiple programs, it's time to redefine the project scope and timeline. Communicate it to all stakeholders, and make sure they are on board with any project changes. ... You can't control a change in business direction your boss makes. However, you can control your project timelines, resources, and how they will be impacted by these new priorities. When a priority change occurs, evaluate impact, and then define a new set of timelines for projects that were already underway. Communicate the impact to your staff and your superiors immediately. One mistake newer project managers make is that they are so eager to please that they try to maintain the original timelines of their projects and just add new projects.


Throwing more people at the problem no longer works

To understand how to proceed, we need to unpack the problem a bit. The first, and guiding, factor is understanding the nature of your business. I often go into organizations where the IT leadership has a limited understanding of their business, and not at the level they need today. To complicate matters, beyond the senior-most IT leader, the level of business knowledge drops off precipitously. IT staff deeper in the organization know little more about their company than a layperson would. While this has worked (marginally) in the past, it will not serve the company moving forward. We have long since passed the point where a company can function without the use of technology. Likewise, we have passed the point where a CIO or IT leader can survive on technology knowledge alone. Hence the value of the traditional CIO is in decline while the value of the transformational CIO is on the upswing. See my post on the difference between the traditional CIO and the transformational CIO for more specifics.


Facing competing objectives, CIOs share prioritization strategies

Effective governance is not something you can do alone. You need a decision-making body to help prioritize IT investment, to establish transparency to your processes and decisions, and to share ownership of those decisions with the other decision makers. This piece is critical because face it: The “tyranny of now” is really the tyranny of your customers’ demands. And since there will never be enough money or bandwidth to meet all those demands at once, managing those expectations must be a team sport. Governance does that for you. To be proactive rather than reactive, CIOs must be realistic about capacity. Don’t promise what you can’t deliver. You can’t manage 45 concurrent major IT projects, so narrow the list and focus on 10. Is that painful? Sure, but it’s also effective. Finally, remember that flexibility is key. When something new emerges, line it up against your current top 10. If necessary, reprioritize. Governance doesn’t mean being rigid. It means being flexible. ... Of course, we need to get down to one to work efficiently and effectively as a single company. So, we held 14 workshops in the first four weeks after the acquisition, meeting with every part of the business, from accounting to legal to program execution.


Moving apps to the cloud? 3 steps to ensure good customer experiences

If your business is like most enterprises today, chances are good it's moving toward a best-of-breed, multi-cloud strategy. You're looking for applications best suited to the business's IT needs and want to run each of those apps in the optimal cloud environment. But if you're going to be mixing and matching cloud architectures and workloads to optimize performance, you need openness and flexibility. You need to select cloud providers and software vendors who embrace open standards, open source technologies, and who excel at ensuring cross-platform interoperability. Look for cloud providers and software companies who make it relatively easy for you to move workloads between on-premises data centers, their cloud and other clouds. Developers on your team will also be pleased with this commitment to openness. Today's developers expect to use modern, open source tools for management and customization. The last thing they want is to get boxed into using subpar, proprietary development tools simply because a software partner deems it necessary.


Cryptojacking campaign exploiting Apache Struts 2 flaw kills off the competition

Researchers from F5 Labs say the Apache bug is being used in a new cryptomining campaign which impacts Linux machines. According to the team, threat actors are harnessing PoC code for the Apache Struts 2 critical remote code execution vulnerability posted to Pastebin to infiltrate Linux systems for the purpose of mining Monero. Mining for cryptocurrency, such as Bitcoin (BTC), Ethereum (ETH), and Monero (XMR), is a completely legitimate activity which uses computing power to find virtual coins. However, when this power is taken without consent, such activities are considered cryptojacking. The most common tactic used by criminals in cryptojacking campaigns is the Coinhive script, a legitimate system which is being widely abused. In July, a massive cryptojacking campaign was uncovered in which a botnet used enslaved MikroTik routers to mine for Monero. Dubbed CroniX, the new attack exploits the Apache bug to send a single HTTP request at the same time as injecting an Object-Graph Navigation Language (OGNL) expression containing malicious JavaScript code.


How Do You Develop A Data Strategy? Here’re 6 Simple Steps That Will Help

There are millions of ways data can help a business but, broadly speaking, they fall into two categories: one is using data to improve your existing business and how you make business decisions. The second is using data to transform your day-to-day business operations. In practice, most companies start out wanting to improve their decision making and take it from there. However, if you want to use data, you must always start with a data strategy. What data you gather and how you analyse it will depend entirely on what you’re looking to achieve – so you need to have thought about this at the outset. Having a data strategy helps the whole process run more smoothly and prepares you and your people for the journey ahead. ... Getting the key company players and decision makers involved will help you create a better data strategy overall, and getting their buy-in at this crucial early stage means they’re more likely to put all that data to good use later on. Keep in mind that, like any business improvement process, things may shift or evolve along the way.


Securing IoT devices: Fortinet's FortiNAC automates the process

This week, security vendor Fortinet announced its new FortiNAC solution aimed at addressing many of the limitations of current NAC products. FortiNAC came to Fortinet via the acquisition of Bradford Networks made earlier this year and fills a hole in the vendor's “Security Fabric” story that delivers consistent, end-to-end threat protection. The strength of FortiNAC is visibility and how it discovers all the endpoints. Instead of relying on a database or endpoint agents, FortiNAC is completely agentless and automates the discovery of endpoints by ingesting a wide range of data sources, such as RADIUS, SNMP, DHCP, LDAP and others, as well as behavioral information. This lets FortiNAC identify over 1,500 device types, compared to other solutions that can identify 500 to 1,000. ... Also, because it pulls information from a wide range of sources, it can identify devices connected on Wi-Fi or the wired network. The majority of IoT devices use Wi-Fi, which is where much of the focus has been from the NAC vendors, but wired IoT endpoints are used widely in many verticals.


Making Change Is Not a Matter of Willpower


“Employees had to revisit their decisions about how to get to work. They could not just mindlessly repeat their old habits. The new office location turned out to be an opportunity for those with strong environmental values to take mass transit rather than drive to work every day.” Changing contexts gave people the opportunity to think about what they were doing. That changed the habit triggers, which in turn created the opportunity to change behavior. “People have challenges in changing behavior because of a misunderstanding about what controls many of our everyday actions,” she says. “Motivation and understanding just aren’t enough on their own to effect change.” Old habits can endure longer than the motivation to try something new, even for the most dedicated of employees. “As leaders, we need to ask: ‘What do I want people to do on a daily basis in this environment?’” She advises, “Understand the underlying context, and make changes needed so that the desired behavior is easy and rewarding.... Everyone responds to that. When your focus is on the behavior, you can create change that outlasts people’s old habits.”


Notes from the frontier: Modeling the impact of AI on the world economy

New research from the McKinsey Global Institute attempts to simulate the impact of AI on the world economy. First, it builds on an understanding of the behavior of companies and the dynamics of various sectors to develop a bottom-up view of how to adopt and absorb AI technologies. Second, it takes into account the disruptions that countries, companies, and workers are likely to experience as they transition to AI. There will very probably be costs during this transition period, and they need to be factored into any estimate. The analysis examines how economic gains and losses are likely to be distributed among firms, employees, and countries and how this distribution could potentially hamper the capture of AI benefits. Third, the research examines the dynamics of AI for a wide range of countries—clustered into groups with similar characteristics—with the aim of giving a more global view.



Quote for the day:


"When a man assumes leadership, he forfeits the right to mercy." -- Gennaro Angiulo


Daily Tech Digest - September 04, 2018

Cyber security training: Is it lacking in the enterprise? image
Everyone in an organisation who is connected to the internet should be given general cyber security training. This is “definitely lacking,” says Wool. As phishing scams – among others – surge, the untrained employee remains a constant risk to the security of their company. The level of training needs to be improved, because currently “there is a poor understanding of the basics of the threat landscape,” according to Wool. “This is something that should be taught in elementary schools. When children learn how to use Excel, PowerPoint and Google, it makes sense for them also to be trained on basic safety rules, just like crossing the street.” ... “A lot can be done and it can be effective, but it takes a very long time to put together,” explains Wool. “Think: how long did it take the human race to figure out what needs to be done to make vehicle transportation reasonably safe. Think about sidewalks, zebra crossings, highway exit and entry ramps and so on. It took 100 years from the invention of the automobile to where we are now. When it comes to safety, we can always do better.”



Multi-Clouds And Composable Infrastructure At VM World

One important trend at VM World and leading up to the show was a focus on software defined infrastructure, including what is called composable infrastructure, that allows virtualizing and addressing individual components, such as storage devices. Before VM World, Dell EMC announced their PowerEdge MX, a high performance, modular infrastructure solution that the company said will easily adapt to future technologies and server disaggregation (a term often used in composable infrastructure). Dell EMC says that the system’s kinetic infrastructure is “uniquely designed without a mid-plane, enabling support for multiple generations of technology releases—processor technologies, new storage types and new connectivity innovations—well into the future.” Specifically, the absence of a mid-plane enables direct compute to I/O module connections, allowing for future technology upgrades without disrupting customer operations and without a mid-plane upgrade.


The Roadmap To Digital Manufacturing Transformation
To build a roadmap to digital transformation, more often than not companies are looking into the future, attempting to visualize where they want or need to be in twenty years, and planning backwards. For many, however, a more proactive approach to planning would be to accept that “You can't know where you're going without knowing where you are now." We often talk to companies who have predictive and preventative aspirations but who still don’t have machines networked, the necessary IT infrastructure to capture and aggregate machine data, or the internal organizational resources required to decipher the data and implement continuous process changes. This roadmap should actually be quite logical at its core: let’s become as capable as we can and have all our ducks in a row to ready ourselves for the greater journey ahead. Once we’ve optimized capability, it’s time to digitize our assets, visualize our manufacturing data in real-time, and measure the success of our KPIs using our tools.


Fintech companies: The ideal talent pool for banks?

Partner up with fintech companies so you can leverage their talent pool. Thailand’s Bank of Ayudhya (BAY), for example, has forged relationships with 25 fintech companies so far, and expects that number to rise to 40 by the end of the year. Other banking giants in Thailand are using the same strategy. Bangkok Bank’s Executive VP Kukkong Ruckphaopunt told local media that the partnership model is a key strategy for acquiring tech talent. Wirawat Panthawangkun, Senior Executive VP of Kasikornbank, said KBank too is using a similar strategy to invest in tech firms to scout out tech talent from around the world. Banks and financial services institutions in other parts of Asia are following suit — except China, which produces eight million computer science graduates every year and needs only the occasional innovator or thought leader to accelerate its tech growth.


What hiring managers want to see in data scientists’ CVs

Some candidates focus on who they reported to, others focus on the accuracy and/or complexity of the models they built, while others only mention the types of projects they worked on. ... My ethos, which is essential in a commercial environment, is to always start with the simplest possible model and only optimise and/or add complexity if/as required. This is precisely what the Lean Startup framework mandates and is precisely what we do in my Data Science team at Royal Mail. This is because you would usually hit diminishing returns as you continue to optimise and/or add complexity to a model, and the key is to know when your model is good enough to have a tangible business impact, and then deliver it, realise the value and move on to the next most crucial problem. So ideally, in the work experience section of the CV, I would like to see multiple impact statements, at least one for each Data Science role the candidate has held. This would give me confidence that the candidate has good commercial awareness and is worth investing in, as I can expect a good ROI.


Microservices in a Post-Kubernetes Era


On cloud-native platforms, observability is not enough. A more fundamental prerequisite is to make microservices automatable by implementing health checks, reacting to signals, declaring resource consumption, and so on. It is possible to put almost any application in a container and run it. But creating a containerized application that can be automated and orchestrated effectively by a cloud-native platform requires following certain rules. Following these principles and patterns will ensure that the resulting containers behave like good cloud-native citizens in most container orchestration engines, allowing them to be scheduled, scaled, and monitored in an automated fashion. Rather than observing what is happening in a service, we want the platform to detect anomalies and reconcile them as declared. Whether that means no longer directing traffic to a service instance, restarting it, scaling up or down, moving it to another healthy host, or retrying a failing request doesn’t matter.
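A minimal Python sketch of the "automatable" contract described above: probe endpoints the platform can poll, plus a graceful reaction to SIGTERM that stops advertising readiness before shutdown. The paths and port handling are common conventions, not a requirement of any particular platform.

```python
# Sketch: liveness/readiness probes and SIGTERM handling for a service
# that wants to be orchestrated rather than merely observed.
import signal
from http.server import BaseHTTPRequestHandler, HTTPServer

ready = {"ok": True}

def probe_status(path, is_ready):
    """HTTP status the platform's probe would receive for a given path."""
    if path == "/healthz":                 # liveness: the process is up
        return 200
    if path == "/readyz":                  # readiness: safe to send traffic
        return 200 if is_ready else 503
    return 404

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(probe_status(self.path, ready["ok"]))
        self.end_headers()

    def log_message(self, *args):          # keep the sketch quiet
        pass

def on_sigterm(signum, frame):
    ready["ok"] = False                    # stop advertising readiness first
    # ...drain in-flight work here, then exit...

signal.signal(signal.SIGTERM, on_sigterm)

server = HTTPServer(("", 0), Health)       # port 0: the OS picks a free port
# server.serve_forever()                   # left commented in this sketch

print(probe_status("/readyz", True), probe_status("/readyz", False))  # 200 503
```

With this contract in place, the platform, not the service, decides whether to restart the instance, reroute traffic, or reschedule it elsewhere, which is exactly the reconciliation behavior the passage describes.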


Card-Skimming Malware Campaign Hits Dozens of Sites Daily

Websites don't necessarily catch on quickly after an infection. "The average recovery time is a few weeks, but at least 1,450 stores have hosted the magentocore[dot]net parasite during the full past six months," de Groot writes. Attackers often execute a brute-force attack against a Magento control panel, de Groot says. And attackers are clever: their code can remove other malicious code that's already in a Magento installation, and it is also designed to hide its tracks. It does so via a backdoor: a cron.php file placed by the attackers periodically downloads "malicious code, and, after running, delete[s] itself, so no traces are left," he writes. The code also changes the password for registered Magento users to "how1are2you3," de Groot writes. ... It's best to nuke infected installations and restart, he says. "Revert to a certified-safe copy of the codebase, if possible," de Groot writes. "Malware is often hidden in default HTML header/footers, but also in minimized, static JavaScript files, hidden deep in the codebase."


A glimpse into the dark underbelly of cryptocurrency markets


What is the business model of the coin rankings sites? Sites like CoinMarketCap, CoinGecko, CoinRanking, Cryptoslate, CryptoCoinRankings, CoinCodex, CryptoCoinCharts, (et al.) sell ads, and in some cases, insert affiliate links to the exchanges. Some of them will sell blended pricing APIs to more sophisticated traders who want a reliable price feed. Many if not most exchanges have affiliate schemes, and referral links (“reflinks”) can be a lucrative source of revenue if you are the intermediary between active traders and exchanges. Sometimes rankings sites win doubly by accepting payment for banner ads for exchanges or trading venues, and then including their own affiliate links in the ad itself. It’s good money if you can get it. Investors go to these sites to find links to exchanges where they can trade their coins of choice, especially if they are smaller projects and do not have many points of liquidity. Since the rankings sites are the ports of call for investors, they have an almost captive audience and can easily monetize with an affiliate link.


Bitcoin Gold delisted from major cryptocurrency exchange after refusing to pay hack damages

The hack at the center of this dispute took place between May 18 and 22, according to an incident response report published this May. The BTG team says the hack was a combination between a 51% attack and a double-spend attack. BTG experts said hackers rented servers through the NiceHash cryptocurrency mining market to overwhelm the Bitcoin Gold network and take control of more than half the BTG network computational hashrate. This is what cryptocurrency experts call a "51% attack," a dangerous scenario that grants attackers the ability to modify transaction details on the entire Bitcoin Gold network. The BTG team says that during the 3.5 days attackers overwhelmed the Bitcoin Gold network, hackers deposited large quantities of Bitcoin Gold funds at cryptocurrency trading platforms. Seconds after these deposits, hackers would convert the funds into another cryptocurrency and transfer the money to new accounts at other exchanges.


Google and Mastercard cut a secret deal to track retail sales data

A Google spokeswoman declined to comment on the partnership with Mastercard, but addressed the ads tool. "Before we launched this beta product last year, we built a new, double-blind encryption technology that prevents both Google and our partners from viewing our respective users’ personally identifiable information,” the company said in a statement. “We do not have access to any personal information from our partners’ credit and debit cards, nor do we share any personal information with our partners.” The company said people can opt out of ad tracking using Google’s “Web and App Activity” online console. Inside Google, multiple people raised objections that the service did not have a more obvious way for cardholders to opt out of the tracking, one of the people said. Seth Eisen, a Mastercard spokesman, also declined to comment specifically on Google. But he said Mastercard shares transaction trends with merchants and their service providers to help them measure "the effectiveness of their advertising campaigns.”



Quote for the day:


"Stressing output is the key to improving productivity, while looking to increase activity can result in just the opposite.” -- Andrew S. Grove


Daily Tech Digest - September 03, 2018

Taking the pulse of machine learning adoption

The least surprising part of the survey is how respondents categorized their organizations' experience with ML: roughly half are beginners in the exploration phase who are just starting to investigate ML. The remainder -- early adopters with roughly two years of ML experience and "sophisticated" organizations with at least five years -- accounted for 36% and 15%, respectively. Our take is that if you blew out the survey to a totally blind sample taken from the general population, those numbers would drop considerably. Nonetheless, we'd surmise that these organizations, by virtue of their budgeting for IT/data or analytics-related learning, are among those who will be spending the lion's share on IT -- and AI and ML in particular. In the interest of full disclosure, these results are of more than passing interest to us because of the primary research that we're conducting for the day job -- Ovum research jointly sponsored with Dataiku on the people and process side of AI -- where we'll be presenting the results at the Strata conference next month.


The Moral Responsibility of Social Networks
How can social media outlets better tune their algorithms? It's a challenging technical problem, but it would also require a willingness to forgo the ad revenue that rides on intentionally manipulative or offensive content. There are also battles to be waged against crafty legitimate users who post edgy content that constantly skirts the boundaries of terms of service. As an example, Twitter struggled internally with how to handle right-wing commentator Alex Jones. But the decisions over Jones and lesser firebrands shouldn't be difficult. Neither Twitter nor Facebook nor any other company would allow a speech in their corporate headquarters that, for example, employs racist dog whistles or subtly encourages aggression against refugees. And online, their policies should be no different. Such censorship would raise ire, of course. Just a handful of social media outlets have become the main channels for distributing information. Drawing up guidelines for acceptable content isn't difficult, but it is hard to apply them evenly.



For CIOs and CISOs security decision is no less than a dilemma

Just imagine the scene through the eyes of any CIO, CISO or CSO and most would agree it's certainly a big dilemma -- an approach that isn't right can be detrimental in its own way. “Exactly, of course we know that is the dilemma and what should be the right (security) approach -- is what we are saying,” said Bhaskar Bakthavatsalu, Managing Director -- Check Point, a cybersecurity solutions company known for its firewall technology. There are more than a thousand security vendors to deal with, a wide range of security technology products and solutions to choose from, security controls to match the unique needs of the organisation and its business domains, and government and industry regulations plus distinctive business demands to adhere to. ... On top of that, continuous cyber threats and sophisticated, unknown virus and malware attacks emerge almost every day from anonymous sources and cybercriminals operating from untraceable locations.


Most UK businesses are not insured against security breaches and data loss, says study

“Third party risk is an interesting topic for cyber insurance underwriting that will certainly evolve as this space matures. Currently cyber insurance underwriting is more focused on the entities themselves being insured, however underwriting takes numerous variables into consideration, and third-party risk will certainly be a factor for the underwriting process, in particular for larger enterprises.” “Security ratings are one of many variables utilised in the underwriting process. Things such as the company itself, the overall industry risk, responses from questionnaires issued, etc. are all factored in, in addition to security ratings. Each area is weighted according to the overall risk being assessed. As the security ratings industry matures, more weight will certainly be given to the information that security ratings provide. When it comes to SMBs, insurers are less focused on assessing the individual risk of each individual company and more on managing the overall risk of the portfolio.”
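The weighting process the quote describes can be illustrated with a toy scoring function. The areas and weights below are entirely hypothetical -- real underwriting models are proprietary -- but the mechanics of blending weighted per-area risk scores into one figure are as described:

```python
# Hypothetical underwriting areas and weights (must sum to 1.0);
# real insurers use their own proprietary factors and weightings.
WEIGHTS = {
    "security_rating": 0.30,
    "industry_risk": 0.25,
    "questionnaire": 0.25,
    "company_profile": 0.20,
}

def underwriting_risk_score(factors: dict) -> float:
    """Blend per-area risk scores (0 = low risk, 1 = high risk)
    into a single weighted score, as the quoted process describes."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[area] * factors[area] for area in WEIGHTS)
```

For an SMB portfolio, an insurer might run the same function over hundreds of small policies and manage the aggregate, rather than tuning the weights per company.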


Difference Between UX and UI Design

Years ago, we had doctors - just doctors. They practiced every kind of medicine, had small offices, and even made house calls. We called them general practitioners. As the field of medicine grew and research and knowledge expanded, doctors began to specialize. Now we go to one doctor for ear, nose and throat issues; we go to another for skin issues; we go to others for issues with any of our major internal organs. ... So, now we have UX and UI designers, each with their specific facets of web design. These terms are often used interchangeably, however, and there is some disagreement as to what exactly each specialty entails. So here is a basic definition of each. UX designers do a lot in the area of how users interact with products and services, and they design that flow of interaction, but they do not focus on marketing or sales. They do, however, work with marketing departments on, for example, the sequence in which products and services may be presented.


Understanding Type I and Type II Errors

In statistical test theory, the notion of statistical error is an integral part of hypothesis testing. The statistical test requires an unambiguous statement of a null hypothesis, for example, "this person is healthy", "this accused person is not guilty" or "this product is not broken". The result of the test of the null hypothesis may be positive or negative. If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then one of two types of error has been made: a type I error or a type II error. ... Type I and type II errors depend heavily upon the language or positioning of the null hypothesis. Changing the positioning of the null hypothesis can cause type I and type II errors to switch roles. It's hard to make a blanket statement that a type I error is worse than a type II error, or vice versa. The severity of type I and type II errors can only be judged in the context of the null hypothesis, which should be thoughtfully worded to ensure that we're running the right test.
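Both error rates are easy to see in simulation. The sketch below uses a hypothetical two-sided z-test at alpha ≈ 0.05: when the null hypothesis is true, the rejections we observe are type I errors (and their rate should hover near alpha); when the null is false, the failures to reject are type II errors.

```python
import random
import statistics

def one_sample_z_reject(sample, mu0, sigma, z_crit=1.96):
    """Two-sided z-test: reject H0 (true mean == mu0) at alpha ~= 0.05."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > z_crit

random.seed(42)
trials = 2000

# H0 is true (mean really is 0): every rejection is a type I error,
# so the observed rate should sit near alpha = 0.05.
type1 = sum(
    one_sample_z_reject([random.gauss(0.0, 1.0) for _ in range(30)], 0.0, 1.0)
    for _ in range(trials)
) / trials

# H0 is false (true mean is 0.5): every failure to reject is a
# type II error; its rate depends on effect size and sample size.
type2 = sum(
    not one_sample_z_reject([random.gauss(0.5, 1.0) for _ in range(30)], 0.0, 1.0)
    for _ in range(trials)
) / trials
```

Rewording the null hypothesis to its opposite swaps which simulated scenario produces which error, which is exactly the role-switching the excerpt warns about.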


Data breach reports see 75% increase in last two years

“Reporting data breaches wasn’t mandatory for most organisations before the GDPR came into force,” explained Andrew Beckett, “so while the data is revealing, it only gives a snapshot into the true picture of breaches suffered by organisations in the UK. “The recent rise in the number of reports is probably due to organisations gearing up for the GDPR as much as an increase in incidents. Now that the regulation is in force, we would expect to see a significant surge in the number of incidents reported as the GDPR imposes a duty on all organisations to report certain types of personal data breach. “We would also expect to see an increase in the value of penalties issued as the maximum possible fine has risen from £500,000 to €20 million or 4 per cent of annual turnover, whichever is higher. The ultimate impact is that businesses face not only a much greater financial risk around personal data, but also a heightened reputational risk.”


5 Lessons I Have Learned From Data Science In Real Working Experience

Be like a detective. Carry out your investigation with laser focus on details. This is particularly important during the process of data cleaning and transformation. Data in real life is messy and you must have the capability to pick up signals from the ocean of noise before you get overwhelmed. Therefore, having a detail-oriented mindset and workflow is of paramount importance to be successful in Data Science. Without a meticulous mindset or a well-structured workflow, you might lose your direction in the midst of diving into your data. You may be diligently performing Exploratory Data Analysis (EDA) for some time but still may not have reached any insights. Or you may be persistently training your model with different parameters, hoping to see some improvement. Or perhaps you may be celebrating the completion of an arduous data cleaning process when the data is in fact not clean enough to feed to your model.


Is It Time to Replace Your Network's Annual Check-Up?

The evolution toward a more holistic, personalized health maintenance program will create an explosion of data. In fact, the amount of worldwide health care data is expected to grow to 25,000 petabytes in 2020. This will put more pressure on our communication networks. As a result, it's imperative to ensure the "health" of the data network is robust and that sharing patient information amongst all stakeholders is possible. Much like the annual physical health checkup, the traditional approach of many network managers was to conduct infrequent network performance checkups and to take action only when there was an unexpected outage or issue. In today's on-demand world where users expect their communications to be available 24/7, this is no longer acceptable. If network managers look only for alarms, they see just a fraction of the information available at any given moment and lose the ability to see the complete network health picture. This can restrict how much preventive action can be taken to avoid network disruption.


The pressure's on: digital transformation seen as a make-or-break proposition for IT managers

As with many technology trends over the years, many executives rush to buy the shiny new gadgets, expecting them to work miracles on their calcified, customer-repelling processes. Digital transformation -- and all the technologies associated with it -- is only the latest example. Companies attempt to put digital approaches in place, thinking they can do things cheaper, without funding the essential background work, such as data integration. But the competitive pressure is intense: 85 percent said disruption in their industry has accelerated over the past 12 months. Thirty-five percent say the primary driver for digital transformation is advances made by competitors, 23 percent changes in regulation, and 20 percent pressure from customers - "meaning digital transformation is mostly being driven by reactive needs, instead of proactive ideas," the survey's authors conclude.



Quote for the day:


"If you don't like your situation, take actions to change it. Hope is not a strategy." -- Gordon Tredgold


Daily Tech Digest - September 02, 2018

Strategies for Improving Smart City Logistics

Efficient, timely and accurate delivery is a necessity for retailers' and logistics providers' survival in an Amazon Prime world. Smart cities' goals of livability and sustainability mean they want fewer trucks, less congestion and less pollution. For all stakeholders to achieve their goals, the only answer is to work together. If cities, retailers and logistics providers work together, collaboration and digital solutions can help resolve the traditional challenges of last-mile logistics and improve the livability and sustainability of cities. ... In Europe -- where urbanization is higher, CO2-reduction goals are more aggressive, and the narrow streets of older cities are less equipped to handle a rise in urban freight transport -- there have been many initiatives and cities working on this issue. The European Union has been co-funding and collaborating with cities and partners such as logistics companies like TNT and DHL, as well as local retailers, in the creation of consolidation centers and more sophisticated delivery practices.


Bank Products Are Dead: Long Live Experiences


By 2020 we’re going to see 50 billion new devices connected to the Internet — everything will be smart. Smart fridges that order your groceries or can tell you what you can cook with the remaining items inside, sensors you wear on your wrist or in your clothes that monitor your health and activity, cars that will talk to each other and drive themselves, smart mirrors that will show you how you look in that new shirt, robot drones and pods that will deliver your groceries or Amazon orders — the world will be filled with smart stuff. We live in a world where new technology emerges and is adopted in months today, versus the years it took previously. It’s all moving so quickly. As more and more technology is injected into our lives, we become acclimatized and just accept the increased role technology has to play. This is known as technology adoption diffusion. As we move to this technology-optimized world, we’ll start to redesign where and how humans fit in society. Banking will be embedded in our life.


This mind-reading AI can see what you're thinking - and draw a picture of it

While headlines around the world have screamed out that AI can now read minds, the reality seems to be more prosaic. Computers are not yet able to anticipate what we think, feel or desire. As science writer Anjana Ahuja remarked in the Financial Times, rather than telepathy, “a more accurate, though less catchy, description would be a ‘reconstruction of visual field’ algorithm”. Most of the research so far has been aimed at deciphering images of what subjects are looking at or, in limited circumstances, what they are thinking about. Studies have previously focused on programs producing images based on shapes or letters they had been taught to recognize when viewed through subjects’ minds. However, in one recent piece of research, from Japan’s ATR Computational Neuroscience Laboratories and Kyoto University, scientists said that not only was a program able to decipher images it had been trained to recognize when people looked at them but: “our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars.”


Microsoft officially christens 'Redstone 5' as the Windows 10 October 2018 Update

The October 2018 Update rollout will likely be staggered, as in past feature releases, with machines known to best handle the new bits getting the update pushed to them first. Microsoft also will likely begin rolling out the server complements to the October 2018 Update -- Windows Server 1809 and Windows Server 2019 -- on the same day in October as the client build goes live. The part of today's announcement that is a bit more surprising is that Microsoft is still saying that the October 2018 Update will be going to the "nearly 700 million devices" running Windows 10. Microsoft has been using this same 700 million figure since March 2018 and hasn't provided an updated momentum figure. ... The Windows 10 October 2018 Update will include the Cloud Clipboard, a dark-mode File Explorer option, a number of new Notepad features and other tweaks and updates. It also will deliver a number of new security and enterprise features, as well as a new Windows 10 Enterprise Remote Sessions edition. Microsoft will likely detail these enterprise features at its Ignite show.


Want To Survive & Thrive With AI?…Then Mind The Skills Gap

“The battle for diversity is vital, just from the perspective of finding the best talent in the widest possible pool. Demystifying the idea that AI is something very difficult is crucial, you do not need to code like Sergey Brin, the co-founder of Google. Being unafraid of a strange discipline is key. There is a huge gap between STEM and the arts and we need each other,” says Dr Lauterbach. ... “The phrase Artificial Intelligence is misleading because everything happens by human design. Human beings pick big data sets, algorithms, methodology and processing hardware.” According to Dr Lauterbach, if algorithms are not created to be inclusive, they could contribute to inequalities and thus would not be effective in helping the world. “AI has a capability to scale everything we are about as humans,” she says. “So if you have a team of only white male developers or only Chinese male developers, then you will get a data set or some algorithms that are wired according to the preferences, habits and thinking processes of those groups.”


The Modern Marketing Model for the Financial Industry


When we consider the new complexities of modern financial services marketing, it is best to integrate both traditional and digital marketing in a manner that achieves synergistic benefits. By fusing together both classical and digital marketing, organizations are in a better position to identify capability gaps, placing a focus on where and how to move forward. The chart below from eConsultancy helps to visualize the required components. This model is a natural progression from previous models used by marketers. For instance, in the 1960s, the prevalent marketing model was the ‘4Ps’ (Product, Price, Place and Promotion). In the 1980s, three additional Ps were added (People, Process and Physical Evidence), reflecting increased customer interaction and the beginning of targeting. In the 1990s, ROI entered the equation, as did the ongoing increase in importance of targeting (the ‘4Cs’ included Consumer, Cost, Communication and Convenience). The new marketing model highlights the importance of customer insight, analytics, brand and customer experience.


7 factors that will push implementation of AI in healthcare


Because the artificial neural networks of deep learning mirror the brain’s ability to learn difficult patterns, Hinton noted that the networks also model complicated relationships between inputs and outputs, used for predicting future medical events from past events or large data sets. “As data sets get bigger and computers become more powerful, the results achieved by deep learning will get better, even with no improvement in the basic learning techniques, although these techniques are being improved,” Hinton wrote. A remaining challenge artificial intelligence has yet to overcome, Hinton wrote, is detecting patterns in unlabeled data in the process called “unsupervised learning.” “As new unsupervised learning algorithms are discovered, the data efficiency of deep learning will be greatly augmented in the years ahead, and its potential applications in healthcare and other fields will increase rapidly,” according to Hinton. Overall, clinicians and physicians should be aware of the challenges that come with implementing AI and deep learning into everyday workflow and know how to efficiently approach it.


Web-based cryptojacking
Taking as an example the 10 most profitable sites that host mining code, the researchers estimated that they are able to generate between 0.53 and 1.51 Monero per day, i.e., between 119 and 340 USD (at the time). While that's not much, given that the revenue is achieved without any cost to the miner, it is still a notable profit. “However, we conclude that current cryptojacking is not as profitable as one might expect and the overall revenue is moderate,” the researchers noted. How to stop it? The researchers found that the existing blacklist-based approaches used by web browsers are trivial to evade and the lists quickly become outdated. Instead of static blacklists, they leveraged a set of heuristic indicators for candidate selection and a dedicated performance measurement step for precise miner identification. But, however suitable this approach is, they pointed out that it likely works well only because today’s mining operators don’t anticipate it. As the only reliable indicator of active mining is prolonged and excessive CPU usage, their advice for browser makers is to implement CPU allotments for tabs.
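As a sanity check on the quoted figures, the Monero price implied by the article works out to roughly 225 USD per XMR -- an assumption back-derived here from its own numbers (119 / 0.53 and 340 / 1.51 both land near that value), not a price taken from the research itself:

```python
# Assumed Monero price, back-derived from the article's USD figures.
XMR_USD = 225.0

def daily_revenue_usd(xmr_per_day: float, price: float = XMR_USD) -> float:
    """Cryptojacking carries no electricity or hardware cost for the
    operator, so this daily USD figure is essentially pure profit."""
    return xmr_per_day * price

# The article's low and high estimates for the top 10 mining sites.
low, high = daily_revenue_usd(0.53), daily_revenue_usd(1.51)
```

That is a few hundred dollars a day across the ten most profitable sites combined, which supports the researchers' conclusion that the overall revenue is moderate.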


Another sticking point the panel discussed was the issue of maturity. That is, organizations have to ask themselves whether they truly have the ability to define, develop and manage their AI investments in a way that will create value. After all, AI isn’t some piece of plug-and-play software you can just flip on and start using. There are significant process changes that need to occur, in technology systems and human employees alike. Security should also be of chief concern. AI’s impact on security can be profound, which means you must determine what controls and protections will be necessary from the very beginning to ensure your sensitive data (sources and outcomes) remain secure. When there’s confusion and disagreement over how to proceed, it can lead to a case of analysis paralysis. So before charging full steam ahead with AI, companies should realistically assess their own readiness to do so. Thankfully, the IP Soft AI Pioneers Forum is now working to develop a universal AI maturity model that may be helpful to companies in these cases.


Focusing on machine learning 2020: augmentation instead of automation


The holy grail of augmentation can easily be seen as the pursuit of creativity, but there are many other areas of interest as well: strategic decision making, such as choosing where to build new skyscrapers, where to build new infrastructure (bridges, roads, facilities), what type of aircraft to buy to maximize profitability and growth, and what routes to fly -- factoring in sustainability. These questions are still largely thought out with Excel sheets, BI tools and GIS systems, and maybe some legacy statistics software (SAS, SPSS) with some custom analysis. While that may be sufficient for some industries, many of these problems have so many attributes that it's impossible for us as humans to make optimal decisions -- hence welcoming optimization and machine learning as augmenting features of decision making. And although it's still quite early to tell, deep learning may well be of use here.
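The kind of many-attribute decision the paragraph mentions -- which routes to fly, within a sustainability constraint -- can be posed as a small combinatorial optimization. Everything below, route names and numbers included, is hypothetical; the point is that an exhaustive search already beats eyeballing a spreadsheet once the combinations multiply:

```python
from itertools import combinations

# Hypothetical candidate routes: (name, expected profit, CO2 tonnes/yr).
ROUTES = [
    ("HEL-JFK", 9.0, 120),
    ("HEL-NRT", 7.5, 110),
    ("HEL-SIN", 6.0, 100),
    ("HEL-CDG", 3.0, 35),
    ("HEL-ARN", 1.5, 15),
]

def best_route_set(co2_budget: float):
    """Exhaustively pick the profit-maximizing set of routes that
    fits within a CO2 budget -- the sort of decision the paragraph
    argues should be augmented by optimization tooling."""
    best, best_profit = (), 0.0
    for r in range(1, len(ROUTES) + 1):
        for combo in combinations(ROUTES, r):
            profit = sum(route[1] for route in combo)
            co2 = sum(route[2] for route in combo)
            if co2 <= co2_budget and profit > best_profit:
                best, best_profit = combo, profit
    return [route[0] for route in best], best_profit
```

With dozens of attributes and hundreds of routes, brute force stops scaling and integer-programming or ML-guided solvers take over, which is precisely where the "augmentation" argument lands.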



Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - September 01, 2018

Human intelligence and AI are vastly different — so let’s stop comparing them
Let’s start with the data part. Contrary to computers, humans are terrible at storing and processing information. For instance, you must listen to a song several times before you can memorize it. But for a computer, memorizing a song is as simple as pressing “Save” in an application or copying the file into its hard drive. Likewise, unmemorizing is hard for humans. Try as you might, you can’t forget bad memories. For a computer, it’s as easy as deleting a file. When it comes to processing data, humans are obviously inferior to AI. In all the examples above, humans might be able to perform the same tasks as computers. However, in the time that it takes for a human to identify and label an image, an AI algorithm can classify one million images. The sheer processing speed of computers enables them to outpace humans at any task that involves mathematical calculations and data processing. However, humans can make abstract decisions based on instinct, common sense and scarce information. A human child learns to handle objects at a very young age. For an AI algorithm, it takes hundreds of years’ worth of training to perform the same task.



What is Industry 5.0?


The handshake between a human being and a robot symbolizes the new reality, even though it will not look like this in the future: most automation, machine intelligence and even robots work in the background, supporting the workforce or taking on large portions of work, as in production and manufacturing. Investment banking systems have been in use for more than a decade to negotiate share prices and make sell/buy decisions within nanoseconds, independent of any human interaction. The next wave of industrial revolution needs to define how we collaborate and what rules govern human-machine interaction when artificial intelligence is making decisions -- as in the impressive example presented during Google I/O 2018 by Sundar Pichai, CEO of Google, where a voice assistant called to make an appointment and the woman answering the call had no way of recognizing that she was speaking to a robot.


Why Cybersecurity Is Becoming A Top-Priority Investment


Using tools like Privnote is one way to securely transfer valuable data. Privnote is a platform that securely transfers data online and then self-destructs. For protecting large amounts of data, the smartest way to go about finding the right cybersecurity company is to ask around for referrals. You’re better off doing this than making a blind Google search and hoping for the best. If a cybersecurity company is good enough for your colleagues and peers, then it will likely be good enough for your business. My business develops engaging content that attracts the millennial generation, which means we launch a considerable amount of online advertising campaigns. Some of these campaigns require creating B2B accounts with other platforms, so I’m not only protecting my clients’ information, but also my own. Additionally, your product itself needs to be protected. Cyber thieves will try to steal your products’ Amazon Standard Identification Number (ASIN) codes and profit from your online sales.


Empowering executives with data security effectiveness evidence

Your leaders are making decisions predicated on these non-security measures every day to increase value for their shareholders, address stakeholder requirements, and mitigate business risks. Security is simply another variable in the business risk equation. In fact, your security program isn’t about security risk in and of itself, but rather, the financial, brand, and operational risk from security incidents. One area where the need for security effectiveness evidence is profusely obvious is around rationalization. For example, many auditors no longer ask, “Do you have security tools in place to mitigate risk?” because the answer is always, “Yes, but we need more tools, training, and people anyhow.” Now auditors are asking for rationalization in terms of, “Can you prove, with quantitative measures, that our security tools are adding value? And can you supply proof regarding the necessity for future security investment?”


Using Neuroscience to Make Feedback Work and Feel Better


Modern humans base their decisions on many of the same pro-social, consensus-building impulses. We make polite chitchat at work, even in our most antisocial states, so others will see us as friendly. We avoid talking to the attractive stranger at the bar because something deep and ancient in us registers the possibility of rejection as a matter of life and death. When neuroscientists conduct brain scans of people exposed to social threats, such as a nasty look or gesture, the resulting images look just like the scans of people exposed to physical threats. Our bodies react in much the same ways. Our faces flush, our hearts race, and our brains shut down. No matter if we’re giving a speech to thousands or coming face-to-face with a jungle cat, our body’s response is the same: We want out. Feedback conversations, as they exist today, activate this social threat response. In West and Thorson’s study, participants’ heart rates jumped as much as 50 percent during feedback conversations.


Big Data And ML: A Marriage Between Giants!


We live in an age where ‘information’ is packaged, shared and valued, quite literally, more than anything else! And, there is enhanced engagement in this information exchange. All this activity is resulting in tons of data being pumped out — Big Data. To those listening, this data can be harnessed and mined for answers. Whether it is regarding business profitability, marketing strategy or identifying and mitigating risk, companies can ascertain any and every detail. Aiding in these pursuits is the growing computational power of systems. There is abundant storage available for all the data. In-memory is adding to the speed of performance. Cloud and pay-as-you-go models are making engagements feasible. And, the economies of scale are making these systems highly accessible and affordable. High-tech companies, technological corporations, and data scientists, all, predict the remarkable, dominant and disruptive power of ML and Big Data combined.


Confronting the Greatest Risks To Financial Services’ Future

In a behavioral study done among international bankers, it was found that bank executives take significantly less risk when reminded of their role as bankers. In the study, they invested about 20% less in the risky asset category relative to the control group. In other words, when they were acting in a ‘banker mentality’ -- reminded about banking, and their bank, and their banking careers -- they were more conservative than they would otherwise be. When the same people were not reminded of their banker role, they took greater risk, indicating that the risk in banking doesn’t come from culture but from structure. The question becomes: is there something about the culture and structure of banks that makes bankers risk-averse? Or is this something that is just evident now? From my perspective, I have seen that “bankers being bankers” tends to result in lower acceptance of change; an adherence to legacy policies, processes, and thought patterns; and the resultant risk of not being able to keep up with consumer demands.


Thinking outside-of-the-black-box of machine learning


“Speech separation or overlapped speech recognition is paramount for far-field conversational speech recognition,” said Yoshioka. “It has a wide range of potential applications, such as meeting assistance and medical dialog transcription. As computers begin to sense the world better and get smarter, they will be able to provide us more effective assistance and help us focus on more important things.” In the accompanying paper titled “Layer Trajectory LSTM”, Microsoft AI researcher Jinyu Li and fellow researchers Changliang Liu and Yifan Gong successfully reassessed the potential for innovation in traditional time-based LSTM networks. Jinyu Li described his conceptual approach, saying, “Sometimes deep learning is treated as a black box and researchers just keep trying different model structures without taking a couple of steps back and thinking about why the models work – and what else might be possible.” Traditional LSTM networks in recurrent neural networks (RNNs) are well-suited to classifying and making predictions based on time-series data such as speech.
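For readers unfamiliar with the time-based recurrence the paper revisits, here is one step of a standard LSTM cell. This is a scalar, illustrative sketch -- real networks vectorize these operations and learn the parameters from data -- but the gate structure is the textbook formulation:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h, c, W, U, b):
    """One time step of a standard LSTM cell (scalar version).
    W, U and b each hold four parameters, one per gate: input,
    forget, output, and the candidate state."""
    i = sigmoid(W[0] * x + U[0] * h + b[0])    # input gate
    f = sigmoid(W[1] * x + U[1] * h + b[1])    # forget gate
    o = sigmoid(W[2] * x + U[2] * h + b[2])    # output gate
    g = math.tanh(W[3] * x + U[3] * h + b[3])  # candidate state
    c_new = f * c + i * g          # cell state carries long-term memory
    h_new = o * math.tanh(c_new)   # hidden state is this step's output
    return h_new, c_new
```

The strict step-by-step dependence of `h_new` and `c_new` on the previous step is what makes the time dimension of LSTMs both powerful for speech and, as the researchers note, worth stepping back from rather than treating as a black box.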


Eclipse Releases Versions 1.4 and 2.0 of MicroProfile

Both of these Eclipse projects have merit and are making progress in their respective domains, with MicroProfile technologies building upon those being contributed to Jakarta EE. But are the projects themselves ready to be merged? IMHO, no. MicroProfile has grown tremendously from its humble beginnings. We have several new component features and versions that extend the Enterprise Java programming model for microservices development. And we have done this in a relatively short amount of time: six major MicroProfile releases with sixteen component releases in less than two years. Given the scale and complexity of its move, Jakarta EE is not yet ready to match this rate of progress. And, as Jakarta EE has not yet completed the definition of its specification process, it is not yet ready to accept the fast-paced release cycle required by MicroProfile. The big difference here is that MicroProfile has never tried to be a standards body.


Think AI Is Too Scary? This Expert Wants to Calm Your Fears


The first thing to tell you is that I really see this as a listening experience, at least initially, so I can be responsive to what the community is looking for. Having said that, one big area is to enhance and strengthen AAAI links with industry. Our annual conference has a lot of participants from industry, but I'd like to see more presence from industry research labs. Traditionally it's been a very academic conference, but today many professors spend time in industry. We need to give that sector a lot more presence. That's a major focus. I am also looking to include underserved communities in our membership to diversify it strongly; launch K-12 initiatives to grow the pipeline; and ensure we include professionals in other areas. ... We need to look at employing ethics within AI at every level: how systems need to be designed with different mechanisms to respond ethically to events; how to understand when an AI system could do harm; and so on.



Quote for the day:


"The great leaders have always stage-managed their effects." -- Charles de Gaulle