Daily Tech Digest - September 03, 2018

Taking the pulse of machine learning adoption

The least surprising part of the survey is how respondents categorized their organizations' experience with ML: roughly half are beginners in the exploration phase who are just starting to investigate ML. The remainder -- early adopters with roughly two years of ML experience and "sophisticated" organizations with at least five years -- accounted for 36% and 15%, respectively. Our take is that if you blew out the survey to a totally blind sample taken from the general population, those numbers would drop considerably. Nonetheless, we'd surmise that these organizations, by virtue of their budgeting for IT/data or analytics-related learning, are among those who will be spending the lion's share on IT -- and AI and ML in particular. In the interest of full disclosure, these results are of more than passing interest to us because of the primary research that we're conducting for the day job -- Ovum research jointly sponsored with Dataiku on the people and process side of AI, where we'll be presenting the results at the Strata conference next month.


The Moral Responsibility of Social Networks
How can social media outlets better tune their algorithms? It's a challenging technical problem, but it would also require a willingness to forgo ad revenue that rides on the back of intentionally manipulative or offensive content. There are also battles to be waged against crafty legitimate users who post edgy content that constantly skirts the boundaries of terms of service. As an example, Twitter struggled internally with how to handle right-wing commentator Alex Jones. But the decisions over Jones and lesser firebrands shouldn't be difficult. Neither Twitter nor Facebook nor any other company would allow a speech in its corporate headquarters that, for example, employs racist dog whistles or subtly encourages aggression against refugees. And online, their policies should be no different. Such censorship would raise ire, of course. Just a handful of social media outlets have become the main channels for distributing information. Drawing up guidelines for acceptable content isn't difficult, but it is hard to apply them evenly.



For CIOs and CISOs, the security decision is no less than a dilemma

Just imagine the scene through the eyes of any CIO, CISO or CSO and most would agree it’s certainly a big dilemma – if not handled in the right way, it could be detrimental in its own way. “Exactly, of course we know that is the dilemma, and what the right (security) approach should be – is what we are saying,” said Bhaskar Bakthavatsalu, Managing Director – Check Point, a cybersecurity solutions company known for its firewall technology. More than a thousand security vendors to deal with, a wide range of security technology products and solutions to choose from, security controls to match the unique needs of the organisation and its business domains, and government and industry regulations plus distinctive business demands to adhere to. ... On top of that, there are continuous cyber threats and unknown, sophisticated virus and malware attacks emerging almost every day from anonymous sources and cybercriminals operating from untraceable locations.


Most UK businesses are not insured against security breaches and data loss, says study

“Third party risk is an interesting topic for cyber insurance underwriting that will certainly evolve as this space matures. Currently cyber insurance underwriting is more focused on the entities themselves being insured; however, underwriting takes numerous variables into consideration, and third-party risk will certainly be a factor in the underwriting process, in particular for larger enterprises.” “Security ratings are one of many variables utilised in the underwriting process. Things such as the company itself, the overall industry risk, responses from questionnaires issued, etc. are all factored in, in addition to security ratings. Each area is weighted accordingly in the overall risk being assessed. As the security ratings industry matures, more weight will certainly be lent to the information security ratings provide. When it comes to SMBs, insurers are less focused on assessing the individual risk of each company and more on managing the overall risk of the portfolio.”


Difference Between UX and UI Design

Years ago, we had doctors - just doctors. They practiced every kind of medicine, had small offices, and even made house calls. We called them general practitioners. As the field of medicine grew and research and knowledge expanded, doctors began to specialize. Now we go to one doctor for ear, nose and throat issues; we go to another for skin issues; we go to others for issues with any of our major internal organs. ... So, now we have UX and UI designers, each with their specific facets of web design. These terms are often used interchangeably, however, and there is some disagreement as to what exactly each specialty entails. So here is a basic definition of each. While UX designers do a lot in the area of how users interact with products and services and in designing that flow of interaction, they do not focus on marketing or sales. They do, however, work with marketing departments on, for example, the sequence in which products and services may be presented.


Understanding Type I and Type II Errors

In statistical test theory, the notion of statistical error is an integral part of hypothesis testing. The statistical test requires an unambiguous statement of a null hypothesis, for example, "this person is healthy", "this accused person is not guilty" or "this product is not broken". The result of the test of the null hypothesis may be positive or may be negative. If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then two types of error are distinguished: type I error and type II error. ... Type I and type II errors depend heavily upon the language or positioning of the null hypothesis. Changing the positioning of the null hypothesis can cause type I and type II errors to switch roles. It’s hard to make a blanket statement that a type I error is worse than a type II error, or vice versa. The severity of the type I and type II errors can only be judged in the context of the null hypothesis, which should be thoughtfully worded to ensure that we’re running the right test.
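
The two error types can be made concrete with a small simulation, a hedged illustrative sketch in Python (numpy assumed available; the one-sample z-test, sample size and effect size are arbitrary choices for demonstration). When the null hypothesis is true, a two-sided test at significance level 0.05 commits a type I error roughly 5% of the time; when the null is false, failing to reject it is a type II error:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_cutoff = 1.96   # two-sided critical value for alpha = 0.05
n_trials = 20_000     # simulated experiments
n = 50                # sample size per experiment

# Case 1: the null hypothesis is TRUE (the mean really is 0).
# Rejecting it here is a type I error (false positive).
false_positives = 0
for _ in range(n_trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    z = sample.mean() * np.sqrt(n)        # z-statistic, sigma known = 1
    if abs(z) > alpha_cutoff:
        false_positives += 1

# Case 2: the null hypothesis is FALSE (the true mean is 0.3).
# Failing to reject it here is a type II error (false negative).
false_negatives = 0
for _ in range(n_trials):
    sample = rng.normal(loc=0.3, scale=1.0, size=n)
    z = sample.mean() * np.sqrt(n)
    if abs(z) <= alpha_cutoff:
        false_negatives += 1

print(f"type I error rate:  {false_positives / n_trials:.3f}")
print(f"type II error rate: {false_negatives / n_trials:.3f}")
```

The type I rate lands near the chosen 0.05 by construction; the type II rate depends on the true effect size and sample size, which is exactly why the two cannot both be judged without context.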


Data breach reports see 75% increase in last two years

“Reporting data breaches wasn’t mandatory for most organisations before the GDPR came into force,” explained Andrew Beckett, “so while the data is revealing, it only gives a snapshot into the true picture of breaches suffered by organisations in the UK. “The recent rise in the number of reports is probably due to organisations gearing up for the GDPR as much as an increase in incidents. Now that the regulation is in force, we would expect to see a significant surge in the number of incidents reported, as the GDPR imposes a duty on all organisations to report certain types of personal data breach. “We would also expect to see an increase in the value of penalties issued, as the maximum possible fine has risen from £500,000 to €20 million or 4 per cent of annual turnover, whichever is higher. The ultimate impact is that businesses face not only a much greater financial risk around personal data, but also a heightened reputational risk.”


5 Lessons I Have Learned From Data Science In Real Working Experience

Be like a Detective. Carry out your investigation with laser focus on details. This is particularly important during the process of data cleaning and transformation. Data in real life is messy and you must have the capability to pick up signals from the ocean of noise before you get overwhelmed. Therefore, having a detail-oriented mindset and workflow is of paramount importance to be successful in Data Science. Without a meticulous mindset or a well-structured workflow, you might lose your direction in the midst of diving into exploring your data. You may be diligently performing Exploratory Data Analysis (EDA) for some time but still may not have reached any insights. Or you may be consistently training your model with different parameters, hoping to see some improvement. Or perhaps you may be celebrating the completion of an arduous data cleaning process when the data is, in fact, not clean enough to feed to your model.
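
One hedged sketch of what a detail-oriented workflow can mean in practice: a small audit function (assuming pandas; the column names and the toy data are invented for the example) that surfaces the kinds of problems worth catching before any EDA or model training begins:

```python
import numpy as np
import pandas as pd

def audit(df: pd.DataFrame) -> dict:
    """Quick data-quality audit: duplicates, missing values,
    and constant columns that would silently mislead a model."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_per_column": df.isna().sum().to_dict(),
        "constant_columns": [c for c in df.columns
                             if df[c].nunique(dropna=False) <= 1],
    }

# Toy frame with the kinds of problems real data hides
df = pd.DataFrame({
    "age":   [34, 34, np.nan, 29, 120],   # missing value, implausible 120
    "city":  ["NY", "NY", "SF", "SF", "NY"],
    "label": [1, 1, 1, 1, 1],             # constant: useless as a target
})
report = audit(df)
print(report)
```

Checks like these are cheap to run on every new extract, which is one way to turn "meticulous mindset" into a repeatable habit rather than an act of willpower.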


Is It Time to Replace Your Network's Annual Check-Up?

The evolution toward a more holistic, personalized health maintenance program will create an explosion of data. In fact, the amount of worldwide health care data is expected to grow to 25,000 petabytes in 2020. This will put more pressure on our communication networks. As a result, it's imperative to ensure the "health" of the data network is robust and that sharing patient information amongst all stakeholders is possible. Much like the annual physical health checkup, the traditional approach of many network managers was to conduct infrequent network performance checkups and to take action only when there was an unexpected outage or issue. In today's on-demand world, where users expect their communications to be available 24/7, this is no longer acceptable. If network managers look only for alarms, they see just a fraction of the information available at any given moment and lose the ability to see the complete network health picture. This can restrict how much preventive action can be taken to avoid network disruption.
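
A minimal sketch of the difference between alarm-only and continuous monitoring, with invented latency numbers: latency that drifts slowly upward never trips a single-sample alarm, but comparing a recent rolling average against an earlier baseline catches the trend. The window size and tolerance factor here are illustrative assumptions, not recommendations:

```python
import statistics

def degradation_alert(latencies_ms, window=10, tolerance=1.5):
    """Flag when the recent average latency drifts well above the
    earlier baseline, even though no single sample looks alarming."""
    baseline = statistics.mean(latencies_ms[:window])
    recent = statistics.mean(latencies_ms[-window:])
    return recent > tolerance * baseline, baseline, recent

# Latency creeps from ~20 ms to ~35 ms: no spike, but a clear trend.
samples = [20, 21, 19, 20, 22, 20, 21, 19, 20, 21,
           24, 26, 28, 30, 31, 33, 34, 35, 34, 36]
alert, baseline, recent = degradation_alert(samples)
print(alert, round(baseline, 1), round(recent, 1))
```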


The pressure's on: digital transformation seen as a make-or-break proposition for IT managers

As with many technology trends over the years, many executives rush to buy the shiny new gadgets, expecting them to work miracles on their calcified, customer-repelling processes. Digital transformation -- and all the technologies associated with it -- is only the latest example. Companies attempt to put digital approaches in place, thinking they can do things cheaper, without funding the essential background work, such as data integration. But the competitive pressure is intense: 85 percent said disruption in their industry has accelerated over the past 12 months. Thirty-five percent say the primary driver for digital transformation is advances made by competitors, 23 percent changes in regulation, and 20 percent pressure from customers - "meaning digital transformation is mostly being driven by reactive needs, instead of proactive ideas," the survey's authors conclude.



Quote for the day:


"If you don't like your situation, take action to change it. Hope is not a strategy." -- Gordon Tredgold


Daily Tech Digest - September 02, 2018

Strategies for Improving Smart City Logistics

Efficient, timely and accurate delivery is a necessity for retailers' and logistics providers' survival in an Amazon Prime world. Smart cities' goals of livability and sustainability mean they want fewer trucks, less congestion and less pollution. For all stakeholders to achieve their goals, the only answer is to work together. If cities, retailers, and logistics providers work together, collaboration and digital solutions can help resolve traditional challenges of last-mile logistics and improve the livability and sustainability of cities. ... In Europe, where urbanization is higher, goals for CO2 reduction are more aggressive, and the narrower streets of its older cities are less equipped to handle a rise in urban freight transport, there have been many initiatives and cities working on this issue. The European Union has been co-funding and working collaboratively with cities and partners such as logistics companies like TNT and DHL, as well as local retailers, in the creation of consolidation centers and more sophisticated delivery practices.


Bank Products Are Dead: Long Live Experiences


By 2020 we’re going to see 50 billion new devices connected to the Internet — everything will be smart. Smart fridges that order your groceries or can tell you what you can cook with the remaining items inside, sensors you wear on your wrist or in your clothes that monitor your health and activity, cars that will talk to each other and drive themselves, smart mirrors that will show you how you look in that new shirt, robot drones and pods that will deliver your groceries or Amazon orders — the world will be filled with smart stuff. We live in a world where new technology emerges and is adopted in months today, versus the years it took previously. It’s all moving so quickly. As more and more technology is injected into our lives, we become acclimatized and just accept the increased role technology has to play. This is known as technology adoption diffusion. As we move to this technology-optimized world, we’ll start to redesign where and how humans fit in society. Banking will be embedded in our life.


This mind-reading AI can see what you're thinking - and draw a picture of it

While headlines around the world have screamed out that AI can now read minds, the reality seems to be more prosaic. Computers are not yet able to anticipate what we think, feel or desire. As science writer Anjana Ahuja remarked in the Financial Times, rather than telepathy, “a more accurate, though less catchy, description would be a ‘reconstruction of visual field’ algorithm”. Most of the research so far has been aimed at deciphering images of what subjects are looking at or, in limited circumstances, what they are thinking about. Studies have previously focused on programs producing images based on shapes or letters they had been taught to recognize when viewed through subjects’ minds. However, in one recent piece of research, from Japan’s ATR Computational Neuroscience Laboratories and Kyoto University, scientists said that not only was a program able to decipher images it had been trained to recognize when people looked at them but: “our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed ‘reconstructs’ or ‘generates’ images from brain activity, not simply matches to exemplars.”


Microsoft officially christens 'Redstone 5' as the Windows 10 October 2018 Update

The October 2018 Update rollout will likely be staggered, as in past feature releases, with machines known to best handle the new bits receiving them first. Microsoft also will likely begin rolling out the server complements to the October 2018 Update -- Windows Server 1809 and Windows Server 2019 -- on the same day in October as the client build goes live. The part of today's announcement that is a bit more surprising is that Microsoft is still saying that the October 2018 Update will be going to the "nearly 700 million devices" running Windows 10. Microsoft has been using this same 700 million figure since March 2018 and hasn't provided an updated momentum figure. ... The Windows 10 October 2018 Update will include the Cloud Clipboard, dark-mode File Explorer option, a number of new Notepad features and other tweaks and updates. It also will deliver a number of new security and enterprise features, as well as a new Windows 10 Enterprise Remote Sessions edition. Microsoft will likely detail these enterprise features at its Ignite show.


Want To Survive & Thrive With AI?…Then Mind The Skills Gap

“The battle for diversity is vital, just from the perspective of finding the best talent in the widest possible pool. Demystifying the idea that AI is something very difficult is crucial, you do not need to code like Sergey Brin, the co-founder of Google. Being unafraid of a strange discipline is key. There is a huge gap between STEM and the arts and we need each other,” says Dr Lauterbach. ... “The phrase Artificial Intelligence is misleading because everything happens by human design. Human beings pick big data sets, algorithms, methodology and processing hardware.” According to Dr Lauterbach, if algorithms are not created to be inclusive, they could contribute to inequalities and thus would not be effective in helping the world. “AI has a capability to scale everything we are about as humans,” she says. “So if you have a team of only white male developers or only Chinese male developers, then you will get a data set or some algorithms that are wired according to the preferences, habits and thinking processes of those groups.”


The Modern Marketing Model for the Financial Industry


When we consider the new complexities of modern financial services marketing, it is best to integrate both traditional and digital marketing in a manner that achieves synergistic benefits. By fusing together both classical and digital marketing, organizations are in a better position to identify capability gaps, placing a focus on where and how to move forward. The chart below from eConsultancy helps to visualize the required components. This model is a natural progression from previous models used by marketers. For instance, in the 1960s, the prevalent marketing model was the ‘4Ps’ (Product, Price, Place and Promotion). In the 1980s, three additional Ps were added (People, Process and Physical Evidence), reflecting increased customer interaction and the beginning of targeting. In the 1990s, ROI entered the equation, as did the ongoing increase in importance of targeting (the ‘4Cs’ included Consumer, Cost, Communication and Convenience). The new marketing model highlights the importance of customer insight, analytics, brand and customer experience.


7 factors that will push implementation of AI in healthcare


Because artificial neural networks of deep learning mirror the brain’s ability to learn difficult patterns, Hinton noted that the networks also model complicated relationships between inputs and outputs, used for predicting future medical events from past events or large data sets. “As data sets get bigger and computers become more powerful, the results achieved by deep learning will get better, even with no improvement in the basic learning techniques, although these techniques are being improved,” Hinton wrote. A remaining challenge artificial intelligence has yet to overcome, Hinton wrote, is detecting patterns in unlabeled data in the process called “unsupervised learning.” “As new unsupervised learning algorithms are discovered, the data efficiency of deep learning will be greatly augmented in the years ahead, and its potential applications in healthcare and other fields will increase rapidly,” according to Hinton. Overall, clinicians and physicians should be aware of the challenges that come with implementing AI and deep learning into everyday workflow and know how to efficiently approach it.


Web-based cryptojacking
Taking as an example the 10 most profitable sites that host mining code, the researchers estimated that they are able to generate between 0.53 and 1.51 Monero per day, i.e., between 119 and 340 USD (at the time). While it’s not much, given that the revenue is achieved without any cost to the miner, this is still a notable profit. “However, we conclude that current cryptojacking is not as profitable as one might expect and the overall revenue is moderate,” the researchers noted. How to stop it? The researchers found that existing blacklist-based approaches used by web browsers are trivial to evade and the lists become outdated fast. Instead of static blacklists, they leveraged a set of heuristic indicators for candidate selection and a dedicated performance measurement step for precise miner identification. But, however suitable this approach is, they pointed out that it likely works well only because today’s mining operators don’t anticipate it. As the only reliable indicator of active mining is prolonged and excessive CPU usage, their advice for browser makers is to implement CPU allotments for tabs.
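
The revenue figures invite a quick back-of-envelope check; the implied exchange rate below is our inference from the article's numbers, not a quoted market price:

```python
# Back-of-envelope check of the researchers' figures: 0.53-1.51 XMR/day
# mapping to roughly 119-340 USD/day implies an XMR price near 225 USD.
xmr_low, xmr_high = 0.53, 1.51
usd_low, usd_high = 119, 340

implied_price_low = usd_low / xmr_low     # USD per XMR at the low end
implied_price_high = usd_high / xmr_high  # USD per XMR at the high end

# Extrapolating to a month shows why the paper calls this "moderate":
monthly_low = usd_low * 30
monthly_high = usd_high * 30
print(f"implied XMR price: ~{implied_price_low:.0f}-{implied_price_high:.0f} USD")
print(f"monthly revenue:   {monthly_low}-{monthly_high} USD")
```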


Another sticking point the panel discussed was the issue of maturity. That is, organizations have to ask themselves whether they truly have the ability to define, develop and manage their AI investments in a way that will create value. After all, AI isn’t some piece of plug-and-play software you can just flip on and start using. There are significant process changes that need to occur, in technology systems and human employees alike. Security should also be of chief concern. AI’s impact on security can be profound, which means you must determine what controls and protections will be necessary from the very beginning to ensure your sensitive data (sources and outcomes) remain secure. When there’s confusion and disagreement over how to proceed, it can lead to a case of analysis paralysis. So before charging full steam ahead with AI, companies should realistically assess their own readiness to do so. Thankfully, the IPsoft AI Pioneers Forum is now working to develop a universal AI maturity model that may be helpful to companies in these cases.


Focusing on machine learning 2020: augmentation instead of automation


The holy grail of augmentation can easily be seen as the pursuit of creativity, but there are many other areas of interest as well. Strategic decision making, for example: choosing where to build new skyscrapers, where to build new infrastructure (bridges, roads, facilities), what type of aircraft to buy to maximize profitability and growth, and what routes to fly — factoring in sustainability. These questions are still largely worked out with Excel sheets, BI tools and GIS systems, and maybe some legacy statistics software (SAS, SPSS) with some custom analysis. While that may be sufficient for some industries, many of these problems have so many attributes that it’s impossible for us as humans to make optimal decisions — hence welcoming optimization and machine learning as augmenting features of decision making. And although it’s still quite early to tell, deep learning may well be of use here.
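
As a toy illustration of the kind of decision problem described here, with too many attribute combinations to eyeball in a spreadsheet, the sketch below brute-forces a route-selection choice under an aircraft-hours budget and a CO2 cap. All route names and numbers are invented; a real problem of this shape would typically go to an integer-programming solver rather than enumeration:

```python
from itertools import combinations

# Candidate routes: (name, expected profit, aircraft-hours needed, CO2 tons)
routes = [
    ("HEL-NRT", 9.0, 13, 310),
    ("HEL-JFK", 8.5, 12, 290),
    ("HEL-CDG", 3.0,  4,  90),
    ("HEL-ARN", 1.2,  2,  40),
    ("HEL-SIN", 9.5, 14, 330),
    ("HEL-LHR", 3.4,  4,  95),
]
HOURS_BUDGET = 30   # available aircraft-hours per day
CO2_CAP = 700       # sustainability constraint (tons/day)

# Enumerate every subset of routes, keep the most profitable feasible one.
best, best_profit = None, -1.0
for r in range(1, len(routes) + 1):
    for combo in combinations(routes, r):
        hours = sum(x[2] for x in combo)
        co2 = sum(x[3] for x in combo)
        profit = sum(x[1] for x in combo)
        if hours <= HOURS_BUDGET and co2 <= CO2_CAP and profit > best_profit:
            best, best_profit = combo, profit

print([x[0] for x in best], best_profit)
```

Six routes already mean 63 non-empty combinations to weigh against two constraints at once; at realistic fleet sizes the combinatorics are exactly why spreadsheet intuition gives way to optimization.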



Quote for the day:

"Becoming a leader is synonymous with becoming yourself. It is precisely that simple, and it is also that difficult." -- Warren G. Bennis

Daily Tech Digest - September 01, 2018

Human intelligence and AI are vastly different — so let’s stop comparing them
Let’s start with the data part. Contrary to computers, humans are terrible at storing and processing information. For instance, you must listen to a song several times before you can memorize it. But for a computer, memorizing a song is as simple as pressing “Save” in an application or copying the file onto its hard drive. Likewise, unmemorizing is hard for humans. Try as you might, you can’t forget bad memories. For a computer, it’s as easy as deleting a file. When it comes to processing data, humans are obviously inferior to AI. In all the examples listed above, humans might be able to perform the same tasks as computers. However, in the time that it takes for a human to identify and label an image, an AI algorithm can classify one million images. The sheer processing speed of computers enables them to outpace humans at any task that involves mathematical calculations and data processing. However, humans can make abstract decisions based on instinct, common sense and scarce information. A human child learns to handle objects at a very young age. For an AI algorithm, it takes hundreds of years’ worth of training to perform the same task.



What is Industry 5.0?


The handshake between a human being and a robot symbolizes the new reality, even knowing that it will not be the reality of the future, as most automation, machine intelligence and even robots work in the background, supporting the workforce or taking on large portions of work, as in production and manufacturing. Investment banking systems have already been in use for more than a decade to negotiate and define share prices and sell/buy decisions within nanoseconds, independent of any human interaction. The next wave of industrial revolution needs to define how we collaborate and how we set the rules of human and machine interaction. Artificial intelligence is already taking decisions, as we could see in an impressive example during Google I/O 2018 presented by Sundar Pichai, CEO of Google, where a voice assistant called to make an appointment and the woman answering the call had no chance to recognize that she was speaking to a robot.


Why Cybersecurity Is Becoming A Top-Priority Investment


Using tools like Privnote is one way to securely transfer valuable data. Privnote is a platform that securely transfers data online and then self-destructs. For protecting large amounts of data, the smartest way to find the right cybersecurity company is to ask around for referrals. You’re better off doing this than making a blind Google search and hoping for the best. If a cybersecurity company is good enough for your colleagues and peers, then it will likely be good enough for your business. My business develops engaging content that attracts the millennial generation, which means we launch a considerable number of online advertising campaigns. Some of these campaigns require creating B2B accounts with other platforms, so I’m not only protecting my clients’ information, but also my own. Additionally, your product itself needs to be protected. Cyber thieves will try to steal your product's Amazon Standard Identification Number (ASIN) and profit from your online sales.


Empowering executives with data security effectiveness evidence

Your leaders are making decisions predicated on these non-security measures every day to increase value for their shareholders, address stakeholder requirements, and mitigate business risks. Security is simply another variable in the business risk equation. In fact, your security program isn’t about security risk in and of itself, but rather, the financial, brand, and operational risk from security incidents. One area where the need for security effectiveness evidence is abundantly obvious is rationalization. For example, many auditors no longer ask, “Do you have security tools in place to mitigate risk?” because the answer is always, “Yes, but we need more tools, training, and people anyhow.” Now auditors are asking for rationalization in terms of, “Can you prove, with quantitative measures, that our security tools are adding value? And can you supply proof regarding the necessity for future security investment?”


Using Neuroscience to Make Feedback Work and Feel Better


Modern humans base their decisions on many of the same pro-social, consensus-building impulses. We make polite chitchat at work, even in our most antisocial states, so others will see us as friendly. We avoid talking to the attractive stranger at the bar because something deep and ancient in us registers the possibility of rejection as a matter of life and death. When neuroscientists conduct brain scans of people exposed to social threats, such as a nasty look or gesture, the resulting images look just like the scans of people exposed to physical threats. Our bodies react in much the same ways. Our faces flush, our hearts race, and our brains shut down. No matter if we’re giving a speech to thousands or coming face-to-face with a jungle cat, our body’s response is the same: We want out. Feedback conversations, as they exist today, activate this social threat response. In West and Thorson’s study, participants’ heart rates jumped as much as 50 percent during feedback conversations.


Big Data And ML: A Marriage Between Giants!


We live in an age where ‘information’ is packaged, shared and valued, quite literally, more than anything else! And, there is enhanced engagement in this information exchange. All this activity is resulting in tons of data being pumped out — Big Data. To those listening, this data can be harnessed and mined for answers. Whether it is regarding business profitability, marketing strategy or identifying and mitigating risk, companies can ascertain any and every detail. Aiding in these pursuits is the growing computational power of systems. There is abundant storage available for all the data. In-memory is adding to the speed of performance. Cloud and pay-as-you-go models are making engagements feasible. And, the economies of scale are making these systems highly accessible and affordable. High-tech companies, technological corporations, and data scientists, all, predict the remarkable, dominant and disruptive power of ML and Big Data combined.


Confronting the Greatest Risks To Financial Services’ Future

In a behavioral study done among international bankers, it was found that bank executives take significantly less risk when reminded of their role as bankers. In the study, they invested about 20% less in the risky asset category relative to the control group. In other words, when they were acting in a ‘banker mentality’ – reminded about banking, and their bank, and their banking careers – they were more conservative than they would otherwise be. When the same people were not reminded of their banker role, they took greater risk, indicating that the risk in banking doesn’t come from culture but from structure. The question becomes: is there something about the culture and structure of banks that makes bankers risk-averse? Or is this something that is just evident now? From my perspective, I have seen that “bankers being bankers” tends to result in lower acceptance of change; an adherence to legacy policies, processes, and thought patterns; and the resultant risk of not being able to keep up with consumer demands.


Thinking outside-of-the-black-box of machine learning


“Speech separation or overlapped speech recognition is paramount for far-field conversational speech recognition,” said Yoshioka. “It has a wide range of potential applications, such as meeting assistance and medical dialog transcription. As computers begin to sense the world better and get smarter, they will be able to provide us more effective assistance and help us focus on more important things.” In the accompanying paper, titled “Layer Trajectory LSTM”, Microsoft AI researcher Jinyu Li and fellow researchers Changliang Liu and Yifan Gong successfully reassessed the potential for innovation in traditional time-based LSTM networks. Jinyu Li described his conceptual approach, saying, “Sometimes deep learning is treated as a black box and researchers just keep trying different model structures without taking a couple of steps back and thinking about why the models work – and what else might be possible.” Traditional LSTM networks are a form of recurrent neural network (RNN) well-suited to classifying and making predictions based on time series data such as speech.
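
To make "time-based LSTM" concrete, here is a minimal single-cell forward pass in numpy. This is the standard LSTM recurrence, not the layer-trajectory variant the paper proposes; the dimensions and random weights are arbitrary illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One time step of a standard LSTM cell.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,).
    Gate order in the stacked matrices: input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c = f * c_prev + i * g       # new cell state mixes old memory and input
    h = o * np.tanh(c)           # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 5, 3                      # input dim, hidden dim
W = rng.normal(size=(4*H, D)) * 0.1
U = rng.normal(size=(4*H, H)) * 0.1
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
for t in range(8):               # unroll over a short input sequence
    x = rng.normal(size=D)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)
```

The unrolled loop is the "trajectory through time" the paper's title plays on; the layer-trajectory idea adds a second path through the stacked layers at each step.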


Eclipse Releases Versions 1.4 and 2.0 of MicroProfile

Both of these Eclipse projects have merit and are making progress in their respective domains, with MicroProfile technologies building upon those being contributed to Jakarta EE. But are the projects themselves ready to be merged? IMHO, no. MicroProfile has grown tremendously from its humble beginnings. We have several new component features and versions that extend the Enterprise Java programming model for microservices development. And we have done this in a relatively short amount of time: Six major MicroProfile releases with sixteen component releases in less than two years. Due to the enormity and complexities of this move, Jakarta EE is not yet ready to match this rate of progress. And, as Jakarta EE has not yet completed the definition of its specification process, it is not yet ready to accept the fast-paced release cycle required by MicroProfile. The big difference here is that MicroProfile has never tried to be a standards body.


Think AI Is Too Scary? This Expert Wants to Calm Your Fears


The first thing to tell you is that I really see this as a listening experience, at least initially, so I can be responsive to what the community is looking for. Having said that, one big area is to enhance and strengthen AAAI links with industry. Our annual conference has a lot of participants from industry but I'd like to see more presence from industry research labs. Traditionally it's been a very academic conference but today, many professors spend time in industry. We need to give that sector a lot more presence. That's a major focus. I am also looking to include underserved communities in our membership to diversify it strongly; launch K-12 initiatives to grow the pipeline; and ensure we include professionals in other areas. ... We need to look at employing ethics within AI at every level: how systems need to be designed with different mechanisms to respond ethically to events; understand when an AI system could do harm; and so on.



Quote for the day:


"The great leaders have always stage-managed their effects." -- Charles de Gaulle


Daily Tech Digest - August 31, 2018

IoT gets smarter but still needs backend analytics

The difference between doing analytics completely on an endpoint device or partially on a device is an important one, according to Gartner research vice president Mark Hung. At the core, the analytics done by IoT implementations is about machine learning and artificial intelligence, letting systems take data provided by smart endpoints and fashion it into actionable insights about reliability, performance, and other line-of-business information automatically. Applying the lessons learned from sophisticated ML is easy enough, even for relatively constrained devices, but some parts of the ML process are much too computationally rigorous to happen at most endpoints. This means that the endpoints themselves don’t change their instructions, but that they provide information that can be used by a more powerful back-end to customize a given IoT implementation on a per-endpoint basis. The case of video analytics for smart city applications like traffic monitoring – using a system where the cameras themselves track pedestrians and motorists, then score that data against a centrally-created AI model – is an instructive one.
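The split Hung describes — endpoints stream observations, a heavier back end fits a model and pushes per-endpoint parameters back down — can be sketched with a deliberately simple anomaly threshold. All device names and numbers here are illustrative:

```python
from statistics import mean, stdev

# Back end: fit a per-endpoint "model" (here, just a mean ± 3σ band)
# from the history that the endpoint streamed up.
def fit_threshold(readings):
    m, s = mean(readings), stdev(readings)
    return {"low": m - 3 * s, "high": m + 3 * s}

# Endpoint: cheap scoring against the model pushed down from the back end.
def score(reading, model):
    return model["low"] <= reading <= model["high"]

history = {"cam-17": [20.1, 20.4, 19.8, 20.0, 20.3, 19.9]}
models = {eid: fit_threshold(r) for eid, r in history.items()}

print(score(20.2, models["cam-17"]))   # in-band reading -> True
print(score(35.0, models["cam-17"]))   # out-of-band reading -> False
```

The computationally cheap part (`score`) runs on the constrained endpoint; the statistically heavier fitting runs centrally, which mirrors the per-endpoint customization pattern the article describes.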


The anatomy of fake news: Rise of the bots

Spreading misinformation has become a mainstream topic, to the extent that even the term ‘Twitter bot’ is well recognised and has established itself in the modern lexicon. While the term is well known, it can be argued that the development and inner workings of Twitter bots are less well understood. Indeed, even identifying accounts that are bots is considerably more difficult, and with good reason, since their objective of appearing to be legitimate users requires constant refinement. This continuous innovation from botnet operators is necessary as social media companies get better at identifying automated accounts. A recent study conducted by SafeGuard Cyber analysed the impact and techniques leveraged by such bots, and in particular looked at bots attributed to Russian disinformation campaigns on Twitter. The research challenges the concept of bot armies: of the 320,000 accounts identified, the bots were divided into thematic categories presenting both sides of the story.


How to retrofit the cloud for security: 2 essential steps

How to retrofit the cloud for security
Identity and access management (IAM) can be retrofitted after a cloud migration without a lot of effort. While it depends on the IAM system you use, the native IAM systems found in clouds such as Amazon Web Services and Microsoft Azure are typically both a better and a quicker choice. At the end of the day, of course, it’s your particular requirements that will determine your choice of IAM. Keep in mind that IAM systems depend on directory services to maintain identity and to provide the proper authorization to those identities. You must deploy one of those systems if you don’t already have one. Also, keep in mind that IAM is only of value if all applications and data are included in the system, both in the cloud and on-premises. I’m not a fan of shortcuts when it comes to cloud computing security. However, reality sometimes makes these shortcuts a necessary evil. The result is not as good as if security were integrated from the start. But if security is never implemented at all, most data and applications are left at risk of attack.


Why Everyone’s Thinking About Ransomware The Wrong Way

Bad-themed crypto ransomware
If you think your IT systems are the target of ransomware, you’re not alone. But you’re also not correct. Your IT systems are just the delivery mechanism. The real target is your employees. Ransoms rely on psychological manipulation that IT systems aren’t susceptible to (AI isn’t there just yet). The systems are the prisoner being held for money. The psychology of ransomware is complex, and the two main types — locker and crypto — use different tactics and are successful within different populations of people (more on this later). It’s not just a case of getting your workforce to abide by security rules and keep their eyes open for dodgy ransom notes (this just helps prevent the data and system from becoming prisoners). You must recognize their unique psychological susceptibilities and design work practices that prevent individuals within your workforce from becoming attractive targets. As mentioned above, ransomware uses complex psychological tactics to get its targets to pay. The two main types of ransomware play off different psychological vulnerabilities.


Here's what two executive surveys revealed about blockchain adoption

blockchain code record coding
Rajesh Kandaswamy, a Gartner fellow and chief blockchain researcher, had a more sobering analysis of blockchain adoption, saying that while interest among enterprises is high, actual deployments are rare. Even when enterprises do perform proof of concept projects, they're often rolled out under pressure from executives who want to do "something" with blockchain. "Most industries are not close to adoption, and even when they do, they do limited activity to test the technology, not as much because of a strong business case," Kandaswamy said via email. A Gartner CIO survey released in May revealed that fewer than 1% of more than 3,100 respondents had rolled out production blockchain systems. Gartner has since completed a second survey whose numbers have yet to be released, but adoption remains low, Kandaswamy said. ... "The challenge for CIOs is not just finding and retaining qualified engineers, but finding enough to accommodate growth in resources as blockchain developments grow," Gartner Research vice president David Furlonger stated in the report.


Android 'API breaking' vulnerability leaks device data, allows user tracking

All versions of Android, including OS forks -- such as Amazon's Kindle FireOS -- are believed to be affected, potentially impacting millions of users. The cybersecurity firm initially reported its findings to Google in March. ... The patch was confirmed in early August, leading to the public disclosure of the vulnerability. Google has fixed the security flaw in the latest version of the Android operating system, Android P, also known as Android 9 Pie. However, the tech giant will not fix prior versions of Android as resolving the vulnerability "would be a breaking API change," according to the cybersecurity firm. Earlier this month, Google announced the launch of Android 9 Pie, which is already rolling out to Android users on some devices. Android devices manufactured by vendors including Nokia, Xiaomi, and Sony will receive the updated OS by the end of fall. The update includes new gesture navigation, themes, and adaptive settings for screen brightness and battery life, among others. Users able to upgrade to Android 9 are encouraged to do so.


Chip shrinking hits a wall -- what it means for you

Chip shrinking hits a wall -- what it means for you
“The vast majority of today’s fabless customers are looking to get more value out of each technology generation to leverage the substantial investments required to design into each technology node. Essentially, these nodes are transitioning to design platforms serving multiple waves of applications, giving each node greater longevity. This industry dynamic has resulted in fewer fabless clients designing into the outer limits of Moore’s Law,” said Thomas Caulfield, who was named CEO of GlobalFoundries last March, in a statement. Making the move to a new process node is no trivial matter: it takes billions of dollars to move down one node in process technology. What Caulfield is saying is there are fewer customers for such bleeding-edge manufacturing processes, so the return on investment isn’t there. “I think we’ve reached a change in Moore’s Law. Moore’s Law is an economic law: that we reduce the cost of transistors with each generation. We will still reduce the size of the transistor but at a slower rate,” said Jim McGregor, president of Tirias Research, who follows the semiconductor industry.


No-code and low-code tools seek ways to stand out in a crowd


A suite of prebuilt application templates aims to help users build and customize a bespoke application, such as sales force automation, recruitment and applicant tracking, HR management and online learning. And a native mobile capability enables developers to take the apps they've built with Skuid and deploy them on mobile devices with native functionality for iOS and Android. "We're seeing a lot of folks who started in other low-code/no-code platforms move toward Skuid because of the flexibility and the ability to use it in more than one type of platform," said Ray Wang, an analyst at Constellation Research in San Francisco. "People want to be able to get to templates, reuse templates and modify templates to enable them to move very quickly." Skuid -- named for an acronym, Scalable Kit for User Interface Design -- was originally an education software provider, but users' requests to customize the software for individual workflows led to a drag-and-drop interface to configure applications.


Will Google's Titan security keys revolutionize account security?

img2713.jpg
Titan security keys use the FIDO Universal Second Factor (U2F) protocol, which relies on public key cryptography. Adding a Titan device to an account ties a public encryption key to that account, which is verified against a private key using a cryptographic signature supplied by the Titan device during login. Titan keys also protect against phishing attacks from fake login portals—even with a compromised password a Titan-enabled account is still protected. When a user logs in to a fake portal, Google said, the key will know that it isn't a legitimate website and will stop the login process immediately. Don't assume that Titan keys are only usable with Google accounts—the FIDO protocol is a popular one that works with a multitude of websites and applications. Any website that supports U2F will work with a Titan key. Titan hardware is also built to be secure—Google designed the devices around a secure element hardware chip that contains all the necessary firmware for it to function, and all of that information is sealed in during the manufacturing process, as opposed to being installed afterward.
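The phishing resistance described above comes from the key binding the site's origin into what it signs. The runnable toy below models that flow with the standard library only; a real U2F key signs with asymmetric crypto inside the secure element, and the HMAC over a device-held secret here is merely a stand-in so the origin-binding idea is executable:

```python
import hashlib
import hmac

# Toy model of U2F origin binding -- NOT real U2F, and HMAC is not
# the public-key signature a real Titan key produces.
DEVICE_SECRET = b"sealed-at-manufacture"

def device_sign(challenge: bytes, origin: str) -> bytes:
    # The key mixes the *origin* into what it signs, so a response
    # minted for a phishing site can never verify for the real one.
    return hmac.new(DEVICE_SECRET, challenge + origin.encode(),
                    hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes) -> bool:
    # The legitimate server only ever checks against its own origin.
    expected = hmac.new(DEVICE_SECRET,
                        challenge + b"https://accounts.example.com",
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-nonce"
good = device_sign(challenge, "https://accounts.example.com")
phish = device_sign(challenge, "https://accounts.examp1e.com")
print(server_verify(challenge, good))   # True
print(server_verify(challenge, phish))  # False
```

Because the phishing page's origin differs, its response can never match what the real server expects — a stolen password alone is not enough, which is the property the article attributes to Titan keys.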


DDD With TLC


When introducing DDD to a new team, start with bounded contexts – breaking down big problems into small, manageable, solvable problems. But leave out the terminology and just start doing it. Understanding the dynamics of a team in order to successfully coach them has a lot to do with instinct and empathy. It’s so important to listen carefully, be respectful, non-judgmental and to be kind. People resist DDD because they believe it is too much to learn or is too disruptive to their current process. Solving small problems is a good approach that can gain trust in adopting DDD. Domain modeling is an art, not a science, so it’s not uncommon to run into a wall and circle back or even have a revelation that makes you change direction. Teams benefit from encountering that with a coach who is familiar with modeling and is not worried about the perspective changing while you are going through the process.



Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks

Daily Tech Digest - August 30, 2018

Companies are not focusing enough on machine identities, says study
We spend billions of dollars protecting usernames and passwords but almost nothing protecting the keys and certificates that machines use to identify and authenticate themselves. The number of machines on enterprise networks is skyrocketing and most organisations haven’t invested in the intelligence or automation necessary to protect these critical security assets. The bad guys know this, and they are targeting them because they are incredibly valuable assets across a wide range of cyber-attacks. According to the study, Securing The Enterprise With Machine Identity Protection: Newer technologies, such as cloud and containerisation, have expanded the definition of a machine to include a wide range of software that emulates physical machines. Furthermore, these technologies are spawning a tidal wave of new, rapidly changing machines on enterprise networks.



The Evolution of IoT Attacks


In addition to the evolution of IoT devices, there has been an evolution in the way attackers think and operate. The evolution of network capabilities and large-scale data tools in the cloud has helped foster the expansion of the IoT revolution. The growth of cloud and always-on availability to process IoT data has been largely adopted among manufacturing facilities, power plants, energy companies, smart buildings and other automated technologies such as those found in the automotive industry. But this has increased the attack surfaces for those that have adopted and implemented an army of possibly vulnerable or already exploitable devices. The attackers are beginning to notice the growing field of vulnerabilities that contain valuable data. In a way, the evolution of IoT attacks continues to catch many off guard, particularly the explosive campaigns of IoT-based attacks. For years, experts have warned about the pending problems of a connected future, with IoT botnets as a key indicator, but very little was done to prepare for it. Now, organizations are rushing to distinguish good traffic from malicious traffic and are having trouble blocking these attacks since they are coming from legitimate sources.


Microservices development will fail with monolithic mindset


Effective microservices development requires organizational change that goes beyond simple, single-team DevOps, said Brian Kirsch, an IT architect and instructor at Milwaukee Area Technical College. Without an overarching DevOps infrastructure across all projects, too many enterprises have created siloed DevOps mini-teams, each producing hundreds of microservices. It's not possible to create a cohesive product when each team works independently and doesn't know what others are doing, Kirsch said. An important practice for organizations moving to microservices is to standardize development tools, frameworks and platforms. Standardization prevents overspending on tools and training and discourages expertise silos and competition for resources. In siloed development, each team in a company often uses its own preferred technology. This reduces engineering resources, because developers may lack skill sets needed to switch teams or substitute on a team using another technology, Kirsch said.


Top 9 Data Science Use Cases in Banking


Banks are obliged to collect, analyze, and store massive amounts of data. ... Nowadays, digital banking is becoming more popular and widely used. This creates terabytes of customer data, thus the first step for a data science team is to isolate truly relevant data. After that, armed with information about customer behaviors, interactions, and preferences, data specialists, with the help of accurate machine learning models, can unlock new revenue opportunities for banks by isolating and processing only the most relevant client information to improve business decision-making. Risk modeling is a high priority for investment banks, as it helps to regulate financial activities and plays the most important role when pricing financial instruments. Investment banking evaluates the worth of companies to create capital in corporate financing, facilitate mergers and acquisitions, conduct corporate restructuring or reorganizations, and for investment purposes.


Improving security is top driver for ISO 27001


“Unfortunately, as long as cyber crime remains a lucrative trade, risks will continue to escalate and attackers will continue to proliferate,” said Alan Calder, founder and executive chairman of IT Governance. “To counter this, organisations need to be fully prepared. ISO 27001, an information security standard designed to minimise risks and mitigate damage, offers the preparedness that organisations need.”  Other top reasons for implementing ISO 27001 include gaining a competitive advantage (57%), ensuring legal and regulatory compliance (52%) and achieving compliance with the EU’s General Data Protection Regulation (GDPR), which was cited by 48% of respondents. According to IT Governance, ISO 27001 provides an excellent starting point for achieving the technical and operational measures required by the GDPR to help mitigate data breaches. Closely in line with the drivers for implementing ISO 27001, improved information security was by far the greatest advantage afforded by achieving certification, according to 89% of respondents.


NSX technology shifts virtual administrator responsibilities


NSX technology, and network virtualization broadly, lives at the kernel on each of the hosts. It has to exist at this level to have access to the traffic it needs without affecting the performance of the VMs. This means it's a host extension, and it falls on the virtual admin to ensure installation and functionality. After that's complete, however, the responsibilities can shift to different people. The functions of firewall and router rules haven't changed just because the environment has moved from physical to virtual, which implies these functions remain the network engineers' responsibilities. The network engineers still have relevant, specialized knowledge, but these rules are often generated automatically based on the VM deployment. Network mapping software, such as vRealize Network Insight, can help manage the additional complexity. Network engineers and virtual admins can both use these tools to examine the virtual network, ensure functionality and minimize risk before establishing a software-defined network.


What is CUDA? Parallel programming for GPUs

What is CUDA? Parallel programming for GPUs
Without GPUs, those training runs would have taken months rather than a week to converge. For production deployment of those TensorFlow translation models, Google used a new custom processing chip, the TPU (tensor processing unit). In addition to TensorFlow, many other DL frameworks rely on CUDA for their GPU support, including Caffe2, CNTK, Databricks, H2O.ai, Keras, MXNet, PyTorch, Theano, and Torch. In most cases they use the cuDNN library for the deep neural network computations. That library is so important to the training of the deep learning frameworks that all of the frameworks using a given version of cuDNN have essentially the same performance numbers for equivalent use cases. When CUDA and cuDNN improve from version to version, all of the deep learning frameworks that update to the new version see the performance gains. Where the performance tends to differ from framework to framework is in how well they scale to multiple GPUs and multiple nodes.



NASA to use data lasers to beam data from space to Earth

NASA to use data lasers to beam data from space to Earth
Laser is not as easy as radio, though, NASA explains. That’s partly because the Earth’s rotation, coupled with the amount of time it takes data to reach the ground station from the spacecraft — albeit faster than radio — means tricky timing calculations are needed to determine where the narrower laser needs to hit. Traditional radio simply needs a data dump, from space, in the vicinity of the ground receiver, whereas laser needs to be continually connected during the transmission. The agency intends to employ a special locking, pointing mechanism. The idea is that a pre-scheduled passing craft’s telescope picks up a finder-signal sent from the ground station. That allows the transmitter to lock on. Mirrors in the spacecraft’s laser modulator are driven by sensors, and they send the beam. Using the LCRD, NASA is aiming for a 1.24 Gigabits per second, geosynchronous-to-ground optical link with two ground stations. The first flight, run by NASA's Goddard Space Flight Center in Greenbelt, Maryland, is expected to take place next year.
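The timing problem described above comes down to simple physics: the photons take an appreciable fraction of a second to descend from geosynchronous orbit, and a laser's narrow divergence puts a far smaller spot on the ground than a radio beam. A back-of-the-envelope sketch (the 15 µrad divergence figure is an assumption for illustration, not from the article):

```python
C = 299_792_458        # speed of light, m/s
GEO_ALT = 35_786_000   # geosynchronous altitude, m

# One-way light travel time from a geosynchronous craft to the ground.
one_way_delay = GEO_ALT / C

# Approximate ground spot diameter for an assumed beam divergence:
# small-angle approximation, spot ~= range * divergence.
beam_divergence = 15e-6            # 15 microradians (illustrative)
footprint = GEO_ALT * beam_divergence

print(f"{one_way_delay * 1000:.1f} ms")  # ~119.4 ms one way
print(f"{footprint:.0f} m")              # ~537 m spot on the ground
```

A roughly half-kilometre spot must stay locked on the receiving telescope for the whole transmission while the Earth rotates underneath, which is why the continuous pointing and locking mechanism matters far more than it does for a wide radio footprint.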


Want a CIO role? Here are the top skills you need and how to get there

While technical skills are more critical, that doesn't necessarily mean executive teams are looking for a former programmer or network engineer to fill their CIO role. A CIO must appreciate the balance between the hype/promise of new technologies and the reality of business, Inuganti said. Despite the need for technical skills, making a jump into leadership at the CIO level requires a deep understanding of the business. CIO candidates must understand the metrics that drive the business, what competitors are doing, and more, Inuganti said. The market previously went too far to the business side of things, but with the growth of cloud, big data, artificial intelligence (AI) and other technologies, it is requiring more technical skills. In terms of what skills are currently hot, data was always there, Inuganti said, but skills around data analysis are growing in desirability for CIOs. He said it's the hottest commodity in the market today, based on what he has seen with executive searches.


Inside the world's most prolific mobile banking malware

The malware's ability to read messages also means it can intercept text messages from the bank containing one-time passwords, helping the attackers to steal from accounts that use additional security. In addition, Asacub ensures the user can't check their mobile banking balance or change any settings because the permissions it has been given enables it to prevent the legitimate banking app from running on the phone. The attacks might seem basic, but they still work, and Kaspersky figures say Asacub currently accounts for 38 percent of mobile banking trojan attacks "The example of the Asacub Trojan shows us that mobile malware can function for several years with minimal changes in its distribution pattern," Shishkova told ZDNet. "One of the main reasons for this is that the human factor can be leveraged through social engineering: SMS-messages look like they are meant for a certain user, so victims unconsciously click on fraudulent links. In addition, with regular change of domains from which the Trojan is distributed, catching it requires heuristic methods of detection," she added.



Quote for the day:


"The People That Follow You Are A Reflection Of Your Leadership." -- Gordon Tredgold

Daily Tech Digest - August 29, 2018

women in it programmer devops reflection monitor glasses by angelos michalopoulos unsplash
More often than not, the network will be multi-vendor, consisting of numerous domains with operational and architectural teams operating in silos. Computer networks are complex, and this complexity can be ’managed out’ by introducing a framework that abstracts it. Lowering complexity and getting things right in a standardized way introduces you to the world of automation. Within a network, there are elements that used to be either easy or hard to automate. Here, one should not assume that if something is easy to automate, we should immediately dive in without considering the easy-versus-impact ratio. Operating system upgrades are easy to automate but have a large impact if something goes wrong. No one wants to live in a world of VLANs and ports. Realistically, they have a relatively low impact with a very basic configuration that needs to be on every switch. This type of device-level automation is an easy ramp to automation as long as it does not touch any services.
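The "very basic configuration that needs to be on every switch" is the classic first automation target: render every device's baseline from a single source of truth instead of typing it per switch. A minimal sketch, with made-up device names and an IOS-style syntax purely for illustration:

```python
# One source of truth for the VLAN baseline that every switch needs.
VLANS = {10: "users", 20: "voice", 99: "mgmt"}
SWITCHES = ["sw-floor1", "sw-floor2"]

def render_config(switch: str) -> str:
    """Render the standardized VLAN baseline for one switch."""
    lines = [f"hostname {switch}"]
    for vid, name in sorted(VLANS.items()):
        lines.append(f"vlan {vid}")
        lines.append(f" name {name}")
    return "\n".join(lines)

# Generate identical, reviewable baselines for the whole fleet.
configs = {sw: render_config(sw) for sw in SWITCHES}
print(configs["sw-floor1"])
```

Because each rendered config is deterministic, a change to the VLAN map propagates to every switch on the next run — low impact per device, exactly the easy-versus-impact profile the article recommends starting with.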


The bright future of machine learning

Man and Machine collaboration
Mind+machine partnerships are useful in any situation where you’re dealing with a large amount of data, constrained time, or the need for continuous coaching or training. Using machine partners can help us make better choices, decisions and products, with a key advantage being that machines can work on demand, while another human might not be able to learn or respond as quickly. An example I use in class is the use of mind+machine to improve food safety at restaurants. Computers use pre-existing historical data to predict when to inspect restaurants. The computer can identify a restaurant that’s likely to have a violation, and then human inspectors can follow up. And this partnership has done a much better job — inspectors were getting to restaurants that had critical violations about a week earlier than they would have if they had just gone with the normally generated schedule.
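The restaurant-inspection partnership boils down to the machine ranking venues by predicted violation risk and the human inspectors working the top of the list. A toy sketch with invented data (the smoothed-rate scoring is an illustrative stand-in for whatever model a city actually uses):

```python
# Toy mind+machine triage: the "machine" scores risk from historical
# inspection data; the human works the ranked list. Data is made up.
history = {
    "Al's Diner":  {"inspections": 10, "violations": 6},
    "Bay Bistro":  {"inspections": 8,  "violations": 1},
    "Corner Cafe": {"inspections": 12, "violations": 9},
}

def risk(stats):
    # Laplace-smoothed violation rate, so sparse histories
    # don't produce extreme 0% or 100% scores.
    return (stats["violations"] + 1) / (stats["inspections"] + 2)

ranked = sorted(history, key=lambda name: risk(history[name]), reverse=True)
print(ranked)  # highest predicted risk first, for the inspector to visit
```

The machine never makes the final call; it reorders the inspector's queue so that likely violators are reached earlier — the roughly one-week head start the article describes.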


This Entrepreneur Shares The Focus Strategy That Helped Him Build an App Used by Millions

Moving, with all the attendant logistical headaches and emotional investment, can be one of the most stressful things you can experience. When your living situation is in flux, it can affect every aspect of your life. And that’s before you factor in trying to find a stranger, or an untested friend or acquaintance, to split the costs with you. This is the problem that Ajay Yadav wants to solve with his company Roomi. He founded the startup in 2015 to help people looking for roommates connect with people who actually are who they say they are. Users of Roomi sign up for the service by completing a background check that includes ID verification and social media accounts. If the prospective roommates think they have a match, they can plan to meet through a secure in-app messaging platform. Since launching in New York City three years ago, the company now operates in more than 20 cities, has acquired four companies, raised $17 million in funding and has a user base of 2.4 million.


Stop Talking Gobbledygook to the Business

Image: Pixabay
For explaining or defending a machine’s good decisions and fixing the bad ones, you’ll want to be able to see scored machine learning output for each record combined with the top variable value details in plain business language that most influenced the predicted outcome. For example: Record ID 232333 was predicted to be a high value customer because of size greater than 10,000 employees, monthly spend between $1M and $1.5M, and so on for relevant decision influencing input variables. To start earning stakeholder trust early on in your machine learning projects, share intermediate reports such as top outcome influencers that can be invaluable to the line of business. Machine learning can rapidly narrow down the scope of potential variables that matter most when faced with hundreds or thousands of variables to analyze.  As you create models, visually share progress and insights on where your model is accurate and where it makes mistakes using scatter plots, combination charts and interactive data visualization tools.
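The "top variable value details in plain business language" can be generated mechanically by ranking each feature's contribution to a scored record and templating the result. A sketch assuming a simple linear model — the weights, feature names, and wording templates are all invented for illustration:

```python
# Turn a scored record into plain-language reason codes by ranking
# each feature's contribution. Weights and labels are assumptions.
weights = {"employees": 0.004, "monthly_spend_musd": 1.2, "tenure_years": 0.3}
labels = {
    "employees": "company size of {v:,} employees",
    "monthly_spend_musd": "monthly spend of ${v}M",
    "tenure_years": "{v} years as a customer",
}

def explain(record, top_n=2):
    """Return the top_n decision-influencing inputs in business language."""
    contribs = {f: weights[f] * v for f, v in record.items()}
    top = sorted(contribs, key=contribs.get, reverse=True)[:top_n]
    return [labels[f].format(v=record[f]) for f in top]

record = {"employees": 12_000, "monthly_spend_musd": 1.3, "tenure_years": 4}
print(explain(record))
```

For non-linear models the per-feature contributions would come from an attribution method rather than weight-times-value, but the reporting step — rank contributions, translate to business language — stays the same.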


Dell Latitude 7490 review: A solid business all-rounder

There is a good range of ports and connectors, including a Smart Card reader, NFC sensor and fingerprint reader -- the latter two located on the wrist rest. The Smart Card reader sits on the front left edge, where there are also two USB 3.1 ports, a full size HDMI connector and a USB Type-C port with DisplayPort and Thunderbolt 3. The large, round power jack is at the back of this edge. Meanwhile the right edge offers a headset jack, a MicroSD card reader, a SIM slot, a third USB 3.1 port and an Ethernet port with a spring-out base that means it can be accommodated easily in the chassis. The pop-out SIM card caddy is perhaps a little vulnerable, though it's about as invisible as it could be, nestled at the bottom of the right edge. It accommodates a Micro-SIM rather than a Nano-SIM. It's nice to see a MicroSD card slot here, although full-size SD would be welcome too. My review sample performed well. Simultaneous writing into a web app, audio streaming and 20-plus Chrome tabs opened across two application windows presented it with no difficulties at all.


The GDPR And The B2B Seller: Keep Calm And Sell On


B2B sellers are struggling to engage empowered B2B buyers — those traveling on self-directed journeys — who are raising the bar for more insight, more co-creation, and more creativity. Piled on top of these challenges, GDPR seems like the whim of a capricious god in a cosmic smackdown, throwing more obstacles in the way of sales representatives. However, many of the identified seller pain points are actually ineffective tactics — a vestige of a bygone era — that are off-putting to customers and prospects who are fed up with a barrage of impersonal, non-purposeful, and irrelevant communications. GDPR prohibits selling methods that leverage nonconsensual use of personal data, and this new reality will ultimately be good for sellers willing to shift their behaviors. Sales will spend less time doing data entry and sending automatic emails and more time focusing on how they can help interested customers. One of the sales leaders we interviewed shared this sentiment: “For sales representatives to stay relevant, they need to stop automating things. This is just the tip of the iceberg for sales and marketing teams becoming more human.”


Are AI and “deep learning” the future of, well, everything?

Machine learning and deep learning have grown from the same roots within computer science, using many of the same concepts and techniques. Simply put, machine learning is an offshoot of artificial intelligence that enables a system to acquire knowledge through a supervised learning experience. It’s a straightforward enough process, in theory: a human being provides data for analysis, and then gives error-correcting feedback that enables the system to improve itself. Depending upon the patterns in the data it’s exposed to, and which of those it recognises, the system will adjust its actions accordingly. It's this ability to self-develop without the need for explicit programming, but rather to change and adapt when exposed to new data, that makes machine learning such a powerful tool. However, what makes deep learning even more valuable is that it does so without, or with much less, human supervision. David Wood, co-founder of Symbian and now a “futurist” at Delta Wisdom, explains the difference using the example of face recognition.
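The "error-correcting feedback" loop described above is the essence of supervised learning, and it fits in a few lines. A minimal perceptron learning the AND function — the simplest possible case, chosen so the whole loop is visible:

```python
# Minimal supervised learning: a perceptron nudges its weights by the
# labeled error on each example until it reproduces an AND gate.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                  # a few passes over the labeled data
    for x, label in data:
        error = label - predict(x)   # the error-correcting feedback
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

A human supplied the labels; the system adjusted itself from the errors. Deep learning stacks many such adjustable layers and, as Wood notes, learns its intermediate representations with far less human supervision.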


5 ways the World Economic Forum says AI is changing banking


“As products and services become more easily comparable and therefore commoditized, it’s not sufficient any more to compete on delivering credit quickly and at a good price, which have been the historic competitive levers” for banks, said Rob Galaski, Deloitte Global Banking and Capital Marketing Consulting leader and one of the authors of the report. For example, to keep its auto loan business relevant, Royal Bank of Canada is piloting a forecasting tool for car dealers to predict demand for vehicle purchases based on customer data. Such information could be more valuable to the dealers than any banking product, Galaski said. “We think that is an exemplar of how we see the industry changing overall,” he said. “Much of the AI debate coming into our work was around replacing humans and doing existing things better or faster. But that take on AI dramatically underestimates the impact. The very way we go about conducting business can be redesigned using AI.”


Excess data center heat is no longer a bug -- it’s a feature!

green data center intro
Developed by MIRIS in cooperation with architecture firm Snøhetta, Skanska, Asplan Viak, and Nokia, The Spark also requires urban data centers to be built in close proximity to the buildings hoping to use the excess heat. These kinds of urban locations may further increase costs and put practical limitations on the size of data centers that can take advantage of the concept. While smaller data centers are increasingly popular, they may not be able to achieve the economies of scale enjoyed by the largest facilities, which can run into the millions of square feet. In addition, depending on the time of day, the weather, and other factors, the heat generated by the data center may not always precisely match the needs of the surrounding community, either generating more heat than the local homes and businesses need, or requiring them to get additional heat from other sources. That’s why the Lyseparken implementation includes a stake in the local power company, Fast Company said, and will “produce and consume electricity from a mix of renewable sources, including solar and thermal energy.”


EU regulation will drive U.S. banks to embrace FinTech or lose market share

"We are already seeing the U.K.'s open banking initiative, which is based on but wider than PSD2, being explored in other markets, including in Central America, Asia and Africa. So it wouldn't be surprising to see similar developments in the U.S.," he added. Even before being pressured by PSD2, some European banks were embracing emerging digital technologies, such as real-time electronic payments; they often gained the technology either through partnerships with FinTechs or by acquiring them outright. "U.K. banks are not, at this stage, seeing FinTechs so much as competitors as they are seeing them as potential collaborators with whom they can develop new journeys, services and products," Chertkow said. "What is clear is that consumer behaviors are changing, particularly with younger generations. Traditional banks need to decide whether they want to maintain their existing business model and seek to differentiate it from the FinTechs or whether they need to respond by copying the best user experiences of the FinTechs."



Quote for the day:


"Commitment is the conviction that it's right to fight for what you want." --Tim Fargo