
Daily Tech Digest - May 10, 2024

Optimize AI at Scale With Platform Engineering for MLOps

Just as platform engineering emerged from the DevOps movement to streamline app development workflows, so too must platform engineering streamline the workflows of MLOps. To achieve this, one must first recognize the fundamental differences between DevOps and MLOps. Only then can one produce an effective platform engineering solution for ML engineers. To enable AI at scale, enterprises must commit to developing, deploying and maintaining platform engineering solutions that are purpose-built for MLOps. Whether due to data governance requirements or practical concerns about moving vast volumes of data over significant geographical distances, MLOps at scale requires enterprises to adopt a hub-and-spoke approach. Model development and training occur centrally, trained models are distributed to edge locations for fine-tuning on local data, and fine-tuned models are deployed close to where end users interact with them and the AI applications they leverage. ... Enterprises should hire engineers with MLOps experience to fill platform engineering roles appropriately. According to research from the World Economic Forum, AI is projected to create around 97 million new jobs by 2025.
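The central-training, edge-fine-tuning flow described above can be sketched in a few lines. This is a toy illustration, not a real MLOps API: the "model" is just a stored baseline, and the site names and helper functions are invented for the example.

```python
# Toy hub-and-spoke MLOps flow: train a model centrally, copy it to
# edge sites, and fine-tune each copy on local data.
from copy import deepcopy
from statistics import mean

def train_central(data):
    """'Train' a trivial model: store the global mean as a baseline."""
    return {"baseline": mean(data), "adjustment": 0.0}

def fine_tune(model, local_data):
    """Shift a copy of the central model toward the local data distribution."""
    tuned = deepcopy(model)
    tuned["adjustment"] = mean(local_data) - model["baseline"]
    return tuned

def predict(model):
    return model["baseline"] + model["adjustment"]

central_model = train_central([10, 12, 11, 13])          # hub: central training
edge_sites = {"emea": [14, 15, 16], "apac": [8, 9, 7]}   # spokes: local data

edge_models = {site: fine_tune(central_model, data)      # distribute + fine-tune
               for site, data in edge_sites.items()}

for site, model in edge_models.items():
    print(site, round(predict(model), 2))
```

The point of the pattern is visible even at this scale: only the small tuned adjustment lives at the edge, while the heavy central training happens once.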


The Blockchain Integrity Act: Latest Attempt to Restrict Financial Privacy

In short, the Blockchain Integrity Act would first establish a two-year moratorium that prohibits financial institutions from going anywhere near cryptocurrency that has been routed through a mixer. With that two-year moratorium in place, the Blockchain Integrity Act would then require the Department of the Treasury to study how people use mixers and other privacy-enhancing technology. ... The second half of the legislation—the request for a study—is less concerning if it’s considered alone and without the surrounding context. The request seeks information regarding different types of privacy-enhancing technology, illicit and legitimate use history, and an analysis of what the government’s role might be here. Those are all reasonable inquiries. Again, without additional context, it’s an encouraging sign that Representative Casten is interested in learning more about how this technology is used for both better and worse. Yet what isn’t encouraging is that Representative Casten introduced the bill saying that “until we’ve studied [privacy enhancing technologies like mixers] and have a good audit trail, the presumption should be that these are money laundering channels.”


Some strategies for CISOs freaked out by the specter of federal indictments

“Some CISOs feel like they’re the frog that’s in the water that’s starting to boil, and they don’t like that feeling, and they want to make sure that they’re doing the right things to navigate that heat,” Sullivan said during a panel discussion, “CISOs Under Indictment: Case Studies, Lessons Learned, and What’s Next,” at this year’s RSA Conference. The panel of current and former CISOs emphasized that in this environment, CISOs need to document their roles and responsibilities, involve the right people in incident response and decision-making processes, and have the courage to stand up for their convictions to minimize the risk that they will face the same fates as Sullivan and Brown. ... “The heat is up because the reality is you’ve got these entities in government who are responding to a huge rise in cybercrime in a way that no one can hide. It’s not like in the old days when if an incident happened, most people wouldn’t notice when stuff happens. Today, the whole world notices,” he said. Blauner’s bottom-line advice to CISOs to protect themselves is to “take a look at every governance document you’ve got and really make sure that it’s crystal clear about roles and responsibilities, especially around who makes risk management decisions.”


Wearable devices can now harvest our brain data. Australia needs urgent privacy reforms

In a background paper published earlier this year, the Australian Human Rights Commission identified several risks to human rights that neurotechnology may pose, including rights to privacy and non-discrimination. Legal scholars, policymakers, lawmakers and the public need to pay serious attention to the issue. The extent to which tech companies can harvest cognitive and neural data is particularly concerning when that data comes from children. This is because children fall outside of the protection provided by Australia’s privacy legislation, as it doesn’t specify an age when a person can make their own privacy decisions. The government and relevant industry associations should conduct a candid inquiry to investigate the extent to which neurotechnology companies collect and retain this data from children in Australia. The private data collected through such devices is also increasingly fed into AI algorithms, raising additional concerns. These algorithms rely on machine learning, which can manipulate datasets in ways unlikely to align with any consent given by a user.


Cloud environments beyond the Big Three

The resurgence and innovation in edge computing and on-premises technology further support the trend toward diversification as data generation and consumption locations continue to spread geographically. ... Edge computing addresses these limitations by processing data closer to where it is generated. This drastically reduces latency and enhances the user experience in applications such as IoT, retail tech, and smart manufacturing. Although many consider edge computing to be small devices, it also includes entire data centers and smaller server installations that exist to serve a specific business location. Many enterprises don’t see the wisdom of sending their data on a 2,000-mile round trip to the point of presence for a public cloud provider, which happens more often than many realize. Additionally, although the cloud offers good scalability and flexibility, concerns over data sovereignty and security continue to push certain industries towards on-premises solutions. Sensitive data and critical applications in sectors such as finance, government, and healthcare often necessitate keeping data in-house under strict regulatory frameworks.


Controlling chaos using edge computing hardware: Digital twin models promise advances in computing

Using machine learning tools to create a digital twin (a virtual copy) of an electronic circuit that exhibits chaotic behavior, researchers found that they were successful at predicting how it would behave and at using that information to control it. Many everyday devices, like thermostats and cruise control, utilize linear controllers—which use simple rules to direct a system to a desired value. Thermostats, for example, employ such rules to determine how much to heat or cool a space based on the difference between the current and desired temperatures. Yet because of how straightforward these algorithms are, they struggle to control systems that display complex behavior, like chaos. As a result, advanced devices like self-driving cars and aircraft often rely on machine learning-based controllers, which use intricate networks to learn the optimal control algorithm needed to operate efficiently. However, these algorithms have significant drawbacks, the most demanding of which is that they can be extremely challenging and computationally expensive to implement.
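The linear (proportional) thermostat-style controller described above is easy to make concrete. The gain value and the one-line room model below are illustrative assumptions, not a real control system.

```python
# A proportional controller: control effort is a fixed gain times the
# error between the desired setpoint and the current value.

def proportional_control(setpoint, current, gain=0.5):
    """Return heating (+) or cooling (-) effort proportional to the error."""
    return gain * (setpoint - current)

# Toy room model: temperature responds directly to the control effort.
temp = 15.0
setpoint = 20.0
for _ in range(20):
    temp += proportional_control(setpoint, temp)

print(round(temp, 3))  # the error shrinks by half each step, so this has converged
```

A rule this simple works because the system is well behaved: the error shrinks predictably toward zero. In a chaotic system the error does not, which is where the learned controllers the article describes come in.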


Digital recreations of dead people need urgent regulation, AI ethicists say

Such services, which are already technically possible to create and legally permissible, could let users upload their conversations with dead relatives to “bring grandma back to life” in the form of a chatbot, researchers from the University of Cambridge suggest. They may be marketed at parents with terminal diseases who want to leave something behind for their child to interact with, or simply sold to still-healthy people who want to catalogue their entire life and create an interactive legacy. But in each case, unscrupulous companies and thoughtless business practices could cause lasting psychological harm and fundamentally disrespect the rights of the deceased, the paper argues. “Rapid advancements in generative AI mean that nearly anyone with internet access and some basic knowhow can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, one of the study’s co-authors at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI). “This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example.”


How To Take The A-I-M Approach To Leadership

I like to break down the concept of taking aim into three components, which I call the A-I-M approach: appreciation, imagination and motivation. The common thread across all three of these principles is communication—and leaders cannot be effective without it. Showing genuine gratitude is a foundational aspect of effective leadership. Expressing heartfelt encouragement demonstrates empathy and humility. And this simple show of appreciation directly benefits the organization by motivating employees to continue contributing to the company’s success and nurturing their loyalty. ... A leader’s job is not to be the author of all ideas but to inspire team members to tap into their imaginations and present fresh approaches to solving problems, delivering solutions and communicating with clients.  ... One of the responsibilities of a leader is to understand what moves their teams into action. As author and leadership coach John Maxwell famously wrote, “A leader is great not because of his or her power, but because of his or her ability to empower others.” I call that motivation.


Colorado AI legislation further complicates compliance equation

CIOs might struggle with the bill’s language because the focus is on whether AI — in any form — helps make “consequential decisions” that could impact Colorado residents. The bill defines consequential decision as being any decision “that has a material legal or similarly significant effect on the provision or denial to any consumer,” which includes educational enrollment, employment or employment opportunity, financial or lending service, healthcare services, housing, insurance, or a legal service. ... Another provision could prove onerous for CIOs who do not have full knowledge of every AI implementation in use in their environment, as it requires companies to make “a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems and the nature, source, and extent of the information collected and used.” ... One especially dicey area in the legislation that should concern CIOs is when AI — especially generative AI — acts on its own. 


AI's Game-Changing Role in Finance and Audit Processes

Auditors can face several risks when using AI. These risks include over-reliance on AI-generated insights, potential biases and quality issues from incomplete or poor-quality data, and cybersecurity threats such as hacking of confidential data held by AI services. Thus, it is necessary to ensure compliance and implement safeguarding measures. Following are some of the possible measures that can be implemented to mitigate the above-mentioned risks. Human judgement: While AI is a great tool to help auditors and organisations streamline their existing processes, AI works on standard algorithms that can’t be customised on a case-to-case basis. Therefore, to ensure the accuracy of the results, a human review step can be put in place to validate the output. Updating back-end algorithms: The better the algorithms, the better the results. Regular updates to the back-end algorithms can yield more accurate and improved outputs, adapting to changing scenarios and data formats, ultimately mitigating the risk of incorrect or inaccurate results.



Quote for the day:

"Don't find fault, find a remedy." -- Henry Ford

Daily Tech Digest - March 18, 2024

Generative AI will turn cybercriminals into better con artists. AI will help attackers to craft well-written, convincing phishing emails and websites in different languages, enabling them to widen the nets of their campaigns across locales. We expect to see the quality of social engineering attacks improve, making lures more difficult for targets and security teams to spot. As a result, we may see an increase in the risks and harms associated with social engineering – from fraud to network intrusions. ... AI is driving the democratisation of technology by helping less skilled users to carry out more complex tasks more efficiently. But while AI improves organisations’ defensive capabilities, it also has the potential for helping malicious actors carry out attacks against lower system layers, namely firmware and hardware, where attack efforts have been on the rise in recent years. Historically, such attacks required extensive technical expertise, but AI is beginning to show promise to lower these barriers. This could lead to more efforts to exploit systems at the lower level, giving attackers a foothold below the operating system and the industry’s best software security defences.


Get the Value Out of Your Data

A robust data strategy should have clearly defined outcomes and measurements in place to trace the value it delivers. However, it is important to acknowledge the need for flexibility during the strategic and operational phases. Consequently, defining deliverables becomes crucial to ensure transparency in the delivery process. To achieve this, adopting a data product approach focused on iteratively delivering value to your organization is recommended. The evolution of DevOps, supported by cloud platform technology, has significantly improved the software engineering delivery process by automating development and operational routines. Now, we are witnessing a similar agile evolution in the data management area with the emergence of DataOps. DataOps aims to enhance the speed and quality of data delivery, foster collaboration between IT and business teams, and reduce the associated time and costs. By providing a unified view of data across the organization, DataOps enables faster and more confident data-driven decision-making, ensuring data accuracy, timeliness, and security. It automates and brings transparency to the measurements required for agile delivery through data product management.


Exposure to new workplace technologies linked to lower quality of life

Part of the problem is that IT workers need to stay updated with the newest tech trends and figure out how to use them at work, said Ryan Smith, founder of the tech firm QFunction, also unconnected with the study. The hard part is that new tech keeps coming in, and workers have to learn it, set it up, and help others use it quickly, he said. “With the rise of AI and machine learning and the uncertainty around it, being asked to come up to speed with it and how to best utilize it so quickly, all while having to support your other numerous IT tasks, is exhausting,” he added. “On top of this, the constant fear of layoffs in the job market forces IT workers to keep up with the latest technology trends in order to stay employable, which can negatively affect their quality of life.” ... “As IT has become the backbone of many businesses, that backbone is key to the businesses operations, and in most cases revenue,” he added. “That means it’s key to the business’s survival. IT teams now must be accessible 24 hours a day. In the face of a problem, they are expected to work 24 hours a day to resolve it. ...”


6 best operating systems for Raspberry Pi 5

Even though it has been nearly seven years since Microsoft debuted Windows on Arm, there has been a noticeable lack of ARM-powered laptops. The situation is even worse for SBCs like the Raspberry Pi, which aren’t even on Microsoft’s radar. Luckily, the talented team behind the WoR project managed to find a way to install Windows 11 on Raspberry Pi boards. ... Finally, we have the Raspberry Pi OS, which has been developed specifically for the RPi boards. Since its debut in 2012, the Raspberry Pi OS (formerly Raspbian) has become the operating system of choice for many RPi board users. Since it was hand-crafted for the Raspberry Pi SBCs, it’s faster than Ubuntu and light years ahead of Windows 11 in terms of performance. Moreover, most projects tend to favor Raspberry Pi OS over the alternatives. So, it’s possible to run into compatibility and stability issues if you use any other operating system when replicating projects created by the lively Raspberry Pi community. You won’t be disappointed with the Raspberry Pi OS if you prefer a more minimalist UI. That said, despite including pretty much everything you need to make the most of your RPi SBC, the Raspberry Pi OS isn't as user-friendly as Ubuntu.


Speaking without vocal cords, thanks to a new AI-assisted wearable device

The breakthrough is the latest in Chen's efforts to help those with disabilities. His team previously developed a wearable glove capable of translating American Sign Language into English speech in real time to help users of ASL communicate with those who don't know how to sign. The tiny new patch-like device is made up of two components. One, a self-powered sensing component, detects and converts signals generated by muscle movements into high-fidelity, analyzable electrical signals; these electrical signals are then translated into speech signals using a machine-learning algorithm. The other, an actuation component, turns those speech signals into the desired voice expression. The two components each contain two layers: a layer of biocompatible silicone compound polydimethylsiloxane, or PDMS, with elastic properties, and a magnetic induction layer made of copper induction coils. Sandwiched between the two components is a fifth layer containing PDMS mixed with micromagnets, which generates a magnetic field. Utilizing a soft magnetoelastic sensing mechanism developed by Chen's team in 2021, the device is capable of detecting changes in the magnetic field when it is altered as a result of mechanical forces—in this case, the movement of laryngeal muscles.


We can’t close the digital divide alone, says Cisco HR head as she discusses growth initiatives

At Cisco, we follow a strengths-based approach to learning and development, wherein our quarterly development discussions extend beyond performance evaluations to uplifting ourselves and our teams. We understand that a one-size-fits-all approach is inadequate. To best play to our employees' strengths, we have to be flexible, adaptable, and open to what works best for each individual and team. This enables us to understand individual employees' unique learning needs and tailor personalised programs that encompass diverse learning options such as online courses, workshops, mentoring, and gamified experiences, catering to diverse learning styles. As a result, our employees are energized to pursue their passions, contributing their best selves to the workplace. Measuring the quality of work, internal movements, employee retention, patents, and innovation, along with engagement pulse assessments, allows us to gauge the effectiveness of our programs. When it comes to addressing the challenge of retaining talent, it's essential for HR leaders to consider a holistic approach. 


Vector databases: Shiny object syndrome and the case of a missing unicorn

What’s up with vector databases, anyway? They’re all about information retrieval, but let’s be real, that’s nothing new, even though it may feel like it with all the hype around it. We’ve got SQL databases, NoSQL databases, full-text search apps and vector libraries already tackling that job. Sure, vector databases offer semantic retrieval, which is great, but SQL databases like Singlestore and Postgres (with the pgvector extension) can handle semantic retrieval too, all while providing standard DB features like ACID. Full-text search applications like Apache Solr, Elasticsearch and OpenSearch also rock the vector search scene, along with search products like Coveo, and bring some serious text-processing capabilities for hybrid searching. But here’s the thing about vector databases: They’re kind of stuck in the middle. ... It wasn’t that early either — Weaviate, Vespa and Milvus were already around with their vector DB offerings, and Elasticsearch, OpenSearch and Solr were ready around the same time. When technology isn’t your differentiator, opt for hype. Pinecone’s $100 million Series B funding was led by Andreessen Horowitz, which in many ways is living by the playbook it created for the boom times in tech.
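Under the hood, the semantic retrieval all of these systems offer is nearest-neighbor search over embeddings. Here is a brute-force sketch with made-up three-dimensional embeddings; real systems use hundreds of dimensions and approximate indexes, and the document names here are invented for the example.

```python
# Rank stored embeddings by cosine similarity to a query embedding --
# the core operation behind semantic (vector) retrieval.
from math import sqrt

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

documents = {
    "doc_sql":    [0.9, 0.1, 0.0],
    "doc_vector": [0.2, 0.8, 0.1],
    "doc_search": [0.1, 0.7, 0.6],
}

query = [0.15, 0.75, 0.3]

ranked = sorted(documents,
                key=lambda d: cosine_similarity(documents[d], query),
                reverse=True)
print(ranked[0])  # the document whose embedding points in the query's direction
```

Whether this ranking runs in a dedicated vector database, in Postgres via pgvector, or in Elasticsearch, the retrieval semantics are the same; the products differ in indexing, scaling, and the surrounding database features.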


The Role of Quantum Computing in Data Science

Despite its potential, the transition to quantum computing presents several significant challenges to overcome. Quantum computers are highly sensitive to their environment, with qubit states easily disturbed by external influences – a problem known as quantum decoherence. This sensitivity requires that quantum computers be kept in highly controlled conditions, which can be expensive and technologically demanding. Moreover, concerns about the future cost implications of quantum computing on software and services are emerging. Ultimately, the prices will be sky-high, and we might be forced to search for AWS alternatives, especially if they raise their prices due to the introduction of quantum features, as is the case with Microsoft banking everything on AI. This raises the question of how quantum computing will alter the prices and features of both consumer and enterprise software and services, further highlighting the need for a careful balance between innovation and accessibility. There’s also a steep learning curve for data scientists to adapt to quantum computing.


AI-Driven API and Microservice Architecture Design for Cloud

Implementing AI-based continuous optimization for APIs and microservices in Azure involves using artificial intelligence to dynamically improve performance, efficiency, and user experience over time. Here's how you can achieve continuous optimization with AI in Azure:

Performance monitoring: Implement AI-powered monitoring tools to continuously track key performance metrics such as response times, error rates, and resource utilization for APIs and microservices in real time.

Automated tuning: Utilize machine learning algorithms to analyze performance data and automatically adjust configuration settings, such as resource allocation, caching strategies, or database queries, to optimize performance.

Dynamic scaling: Leverage AI-driven scaling mechanisms to adjust the number of instances hosting APIs and microservices based on real-time demand and predicted workload trends, ensuring efficient resource allocation and responsiveness.

Cost optimization: Use AI algorithms to analyze cost patterns and resource utilization data to identify opportunities for cost savings, such as optimizing resource allocation, implementing serverless architectures, or leveraging reserved instances.
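At its core, the dynamic-scaling idea above reduces to a decision rule that maps observed demand to an instance count. The capacity figure, bounds, and function name below are assumptions for illustration, not an Azure API:

```python
# Pick an instance count from observed demand, a per-instance capacity,
# and configured min/max bounds -- the skeleton of a scale-out rule.
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=2, max_instances=20):
    """Scale out to cover demand, clamped to the configured bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(50))    # below the floor, so the minimum applies
print(desired_instances(750))   # demand-driven count
print(desired_instances(5000))  # capped at the configured maximum
```

An AI-driven scaler differs mainly in where `requests_per_sec` comes from: instead of the current reading, it feeds in a predicted workload so instances are warm before the demand arrives.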


4 ways AI is contributing to bias in the workplace

Generative AI tools are often used to screen and rank candidates, create resumes and cover letters, and summarize several files simultaneously. But AIs are only as good as the data they're trained on. GPT-3.5 was trained on massive amounts of widely available information online, including books, articles, and social media. This online data inevitably reflects societal inequities and historical biases, which the AI bot inherits and replicates to some degree. No one using AI should assume these tools are inherently objective because they're trained on large amounts of data from different sources. While generative AI bots can be useful, we should not underestimate the risk of bias in an automated hiring process -- and that reality is crucial for recruiters, HR professionals, and managers. Another study found racial bias is present in facial-recognition technologies that show lower accuracy rates for dark-skinned individuals. Something as simple as data for demographic distributions in ZIP codes being used to train AI models, for example, can result in decisions that disproportionately affect people from certain racial backgrounds.



Quote for the day:

"The most common way people give up their power is by thinking they don't have any." -- Alice Walker

Daily Tech Digest - August 03, 2023

When your teammate is a machine: 8 questions CISOs should be asking about AI

There are many potential benefits that can flow from incorporating AI into security technology, according to Rebecca Herold, an IEEE member and founder of The Privacy Professor consultancy: streamlining work to shorten finish times for projects, the ability to make quick decisions, to find problems more expeditiously. But, she adds, there are a lot of half-baked instances being employed and buyers "end up diving into the deep end of the AI pool without doing one iota of scrutiny about whether or not the AI they view as the HAL 9000 savior of their business even works as promised." She also warns that when "flawed AI results go very wrong, causing privacy breaches, bias, security incidents, and noncompliance fines, those using the AI suddenly realize that this AI was more like the dark side of HAL 9000 than they had even considered as being a possibility." To avoid having your AI teammate tell you, "I'm sorry, Dave, I'm afraid I can't do that," when you are asking for results that are accurate, non-biased, privacy-protective, and in compliance with data protection requirements, Herold advises that every CISO ask eight questions.


Generative AI needs humans in the loop for widespread adoption

Generative AI by itself has many positives, but it is currently a work in progress and it will need to work with humans for it to transform the world - which it is almost certain to do. This blending of man and machine is best described as “AI with humans in the loop” and it is already being widely adopted by businesses who want to cut operating costs and improve customer services, but also realise that humans will be crucial if these objectives are to be achieved. One of the sectors embracing this new normal is in financial journalism. Reuters managing director Sue Brooks announced that AI will be used to cover news stories and will create a “golden age” of news. Crucially, she also said it was vital there “was always a human in the loop to ensure total accuracy”. Reuters content now has automated time-coded transcripts and translation of many languages into English, part of the Reuters Connect service. Brooks went on to say that this meld would “free up brain power to be creative and put all these tools in your toolbox to create magical experiences for readers”.


AI chip adds artificial neurons to resistive RAM for use in wearables, drones

According to Weier Wan, a graduate researcher at Stanford University and one of the authors of the paper, published in Nature yesterday, NeuRRAM has been developed as an AI chip that greatly improves energy efficiency of AI inference, thereby allowing complex AI functions to be realized directly within battery-powered edge devices, such as smart wearables, drones, and industrial IoT sensors. "In today's AI chips, data processing and data storage happen in separate places – computing unit and memory unit. The frequent data movement between these units consumes the most energy and becomes the bottleneck for realizing low-power AI processors for edge devices," he said. To address this, the NeuRRAM chip implements a "compute-in-memory" model, where processing happens directly within memory. It also makes use of resistive RAM (RRAM), a memory type that is as fast as static RAM but is non-volatile, allowing it to store AI model weights. 


The CISO role has changed, and CISOs need to change with it

Perhaps the best way to improve security—and make the CISO’s job a little easier—is not reliant on technology. A change in culture is the best way to truly create an organization where security is top of mind. CISOs, part of upper management, but also part of the security team, are uniquely positioned to lead this change – both with other leaders and those they lead. A security-first culture requires embedding security in everything a business does. Developers should be enabled to create secure code that is free from vulnerabilities and resistant to attacks as soon as it is written, as opposed to being a consideration much later in the SDLC. Designated security champions from the developer ranks should lead this charge, acting as both coach and cheerleader. This approach means that security is not being mandated from above, but part of the team’s DNA and backed up by management. This cannot be an overnight change, and may be met with resistance. But the threat landscape is too complex, too advanced and too ubiquitous for any one person or even a small team to handle alone.


Hosting Provider Accused of Facilitating Nation-State Hacks

The allegations, whether true or not, are a reminder that cybercrime doesn't operate in a vacuum. Rather, there's a burgeoning service and support ecosystem. Services include initial access brokers who provide on-demand access to victims, botnet owners who facilitate malware-laden phishing attacks, and repacking services that make malware tougher to spot. They also include ransomware-as-a-service operators who lease their code to business partners, the affiliates who use it to infect victims, and cryptocurrency money laundering services that help criminals - operating online or off - convert their ill-gotten gains into cash. Online attackers require infrastructure for launching their attacks. Some make use of bulletproof service providers, which provide VPS and other types of hosting services in return for a promise, typically for a relatively high fee, that customers can do whatever they like. Halcyon's report alleges that Cloudzy functionally operates in a similar manner, due to a lack of proper oversight, including allowing cryptocurrency-using customers to be able to remain anonymous.


The tug-of-war between optimization and innovation in the CIO’s office

The downside of prioritizing optimization is the risk of overlooking opportunities for innovation that could have long-term impacts on the organization’s growth and relevance. Think game-changing new systems, such as AI, that increase supply chain efficiency, or automation in manufacturing that speeds up productivity while reducing costs. Usually, the value of a business is directly defined by the innovations that can drive it. Think about the services we use now, from food delivery to home sharing, with the draw being better customer experiences through innovation. Emphasizing innovation enables companies to stay ahead of the curve, attracting customers with cutting-edge products and services. ... These mistakes will kill a company. Taking resources away from innovation and spending them on making things work as they should removes business value. I think we’re going to see a great many businesses spend so much money to fix past mistakes that they’ll end up throwing in the towel. 


Flight to cloud drives IaaS networking adoption

IDC describes IaaS cloud networking as a foundational networking layer that allows large enterprises and technology providers to connect data centers, colocation environments, and cloud infrastructure. With IaaS networking, the network infrastructure and services are scalable and available on-demand, provisioned and consumed just like any other cloud service. That makes this infrastructure more scalable and agile than traditional approaches to networking, according to IDC. Direct cloud connects/interconnects is the largest segment of IaaS networking, accounting for more than half of all IaaS networking revenue. The four other major segments of the IaaS networking market are cloud WAN (transit), IaaS load balancing, IaaS service mesh, and cloud VPNs (to IaaS clouds), according to IDC. Cloud WAN, which includes cloud middle-mile and core transit networks, is the fastest-growing segment of IaaS networking, with a forecasted five-year compound annual growth rate of 112%, says IDC. IaaS service meshes are also expected to see strong growth, with a forecasted five-year compound annual growth rate of 68%.
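To put IDC's growth figures in perspective, a 112% compound annual growth rate means revenue more than doubles each year. A quick calculation from a nominal 1.0 unit of starting revenue:

```python
# Compound growth: value after n years at a fixed annual growth rate.
def compound_growth(initial, cagr, years):
    return initial * (1 + cagr) ** years

print(round(compound_growth(1.0, 1.12, 5), 1))   # 112% CAGR over 5 years
print(round(compound_growth(1.0, 0.68, 5), 1))   # 68% CAGR over 5 years
```

Even the "slower" 68% service-mesh segment grows severalfold over the forecast window, which is why IDC treats both as high-growth segments.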


The rise of Generative AI in software development

AI is accelerating the process of going from zero to one – it jumpstarts innovation, releasing developers from the need to start from scratch. But the one-to-n problem remains – they start faster but will quickly have to deal with issues like security, governance, code quality, and managing the entire application lifecycle. The largest cost of an application isn't creating it – it's maintaining it, adapting it, and ensuring it will last. And if organisations were already struggling with tech debt (code left behind by developers who quit, or by vendors who sunset apps, creating monstrous maintenance workloads), now they'll also have to handle massive amounts of AI-generated code that their developers may or may not understand. As tempting as it may be for CIOs to assume they can train teams on how to prompt AI and use it to get any answers they need, it might be more efficient to invest in technologies that help you leverage Gen AI in ways that you can actually see, control and trust. This is why I believe that in the future, fundamentally, everything will be delivered on top of AI-powered low-code platforms. 


Will law firms fully embrace generative AI? The jury is out | The AI Beat

On one hand, gen AI is shaking up the legal industry, with companies like Everlaw adding options to their product portfolio, while Thomson Reuters can integrate with Microsoft 365 Copilot to power legal content generation directly in Word. On the other hand, lawyers tend to be a conservative bunch — and in this case, attorneys would likely be wise to be cautious, with headlines like “New York lawyers sanctioned for using fake ChatGPT cases in legal brief” going viral. Another problem is that their clients may not feel comfortable with law firms using gen AI — a new survey found that one-third of consumer respondents said they’re against any use of gen AI in the legal field. ... But with Everlaw’s new gen AI now available in beta, lawyers can go beyond just clustering data at the aggregate level to querying, summarizing and otherwise extracting details from documents to get what they need. For example, the company says that while it typically takes hours for a legal professional to compose a statement of facts, it can now happen in about 10 seconds, delivering legal teams a rough draft to edit and fact check. 


Vulnerability Management: Best Practices for Patching CVEs

In a perfect world, you would analyze all CVEs first to determine the priority order for patching. But this just isn’t scalable due to the sheer number of vulnerabilities and how frequently CVEs are discovered. In reality, only a handful of CVEs actually affect your software. Of course, there’s no way to know for certain how a CVE affects your application until it has been analyzed, but because there are so many, including those from transitive dependencies, it is nearly impossible to analyze them all before new CVEs are discovered or within a tight release schedule. Instead, we recommend you start by patching all critical and high-severity CVEs without analysis. ... Preventing, detecting and patching CVEs needs to be a shared responsibility between developers and security teams. It is not sustainable for security teams to bear the responsibility of managing and patching CVEs alone. Development teams can often be hesitant to push frequent updates for fear that updates to software libraries will create bugs in their software.
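The recommended ordering can be sketched as a simple triage step. The CVE record shape and severity labels below are illustrative assumptions, not a particular scanner's output format:

```typescript
// Sketch: patch critical/high CVEs without analysis, queue the rest for triage.
type Severity = "critical" | "high" | "medium" | "low";

interface Cve {
  id: string;
  severity: Severity;
  transitive: boolean; // pulled in via a transitive dependency?
}

const rank: Record<Severity, number> = { critical: 0, high: 1, medium: 2, low: 3 };

// Split findings into a "patch now" bucket and an "analyze first" bucket,
// most severe first within each.
function triage(cves: Cve[]): { patchNow: Cve[]; analyzeFirst: Cve[] } {
  const sorted = [...cves].sort((a, b) => rank[a.severity] - rank[b.severity]);
  return {
    patchNow: sorted.filter((c) => rank[c.severity] <= rank.high),
    analyzeFirst: sorted.filter((c) => rank[c.severity] > rank.high),
  };
}
```

The point is not the sorting itself but the policy it encodes: severity alone decides the first wave, and analysis effort is reserved for the long tail.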



Quote for the day:

"Our greatest battles are with our own minds." -- Jameson Frank

Daily Tech Digest - September 14, 2022

A vision for making open source more equitable and secure

There have been multiple attempts at providing incentive structures, typically involving sponsorship and bounty systems. Sponsorship makes it possible for consumers of open source software to donate to the projects they favor. Only projects at the top of the tower are typically known and receive sponsorship. This biased selection leads to an imbalance: Foundational bricks that hold up the tower attract few donations, while favorites receive more than they need. In contrast, tea will give package maintainers the opportunity to publish their releases to a decentralized registry powered by a Byzantine fault-tolerant blockchain to eliminate single sources of failure, provide immutable releases, and allow communities to govern their regions of the open-source ecosystem, independent of external agendas. Because of the package manager’s unique position in the developer tool stack—it knows all layers of the tower—it can enable automated and precise value distribution based on actual real-world usage.
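The usage-based value distribution such a package manager could enable might look, in spirit, like a proportional split across the whole dependency tower. Package names and numbers here are hypothetical:

```typescript
// Sketch: split a reward pool across packages in proportion to measured
// real-world usage, so foundational bricks deep in the tower are funded
// alongside the visible favorites at the top.
function distribute(pool: number, usage: Record<string, number>): Record<string, number> {
  const total = Object.values(usage).reduce((a, b) => a + b, 0);
  const shares: Record<string, number> = {};
  for (const [pkg, count] of Object.entries(usage)) {
    shares[pkg] = total === 0 ? 0 : (pool * count) / total;
  }
  return shares;
}
```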


Cognitive Overload: The hidden cybersecurity threat

Cognitive overload occurs when workers are trying to take in too much information or execute too many tasks. This typically falls under two areas for cybersecurity analysts: intrinsic load, the piecing together of complex technical information to perform incident response activities; and extraneous load, the other 97% of data in a SIEM that they must filter out, while also handling team conversations and sidebar questions. Ultimately, cognitive overload leads to poor performance levels, a lack of focus, and a lack of fulfillment. This can have particularly detrimental consequences within cybersecurity, where ransomware attacks rose 13% year-over-year – more than the past five years combined. To boot, just under half of senior cyber professionals (45%) have considered quitting the industry altogether because of stress. To accommodate the needs of this critical workforce – and fill the 771,000 cyber positions open today – companies must make easing cognitive overload a top priority. Today, it stems from two major issues. First, organizations typically lack direction in cybersecurity, tasking analysts with a broad and daunting mandate: defend our infrastructure. It’s too abstract and leaves them unsure of their roles and responsibilities. 


Medical device vulnerability could let hackers steal Wi-Fi credentials

A vulnerability found in an interaction between a Wi-Fi-enabled battery system and an infusion pump for the delivery of medication could provide bad actors with a method for stealing access to Wi-Fi networks used by healthcare organizations, according to Boston-based security firm Rapid7. The most serious issue involves Baxter International’s SIGMA Spectrum infusion pump and its associated Wi-Fi battery system, Rapid7 reported this week. The attack requires physical access to the infusion pump. The root of the problem is that the Spectrum battery units store Wi-Fi credential information on the device in non-volatile memory, which means that a bad actor could simply purchase a battery unit, connect it to the infusion pump, and quickly turn it on and off again to force the infusion pump to write Wi-Fi credentials to the battery’s memory. Rapid7 added that the vulnerability carries the additional risk that discarded or resold batteries could be acquired in order to harvest Wi-Fi credentials from the original organization, if that organization hadn’t been careful about wiping the batteries’ memory before getting rid of them.


Four Action Steps for Shoring Up OT Cybersecurity

Having proactive safeguards in place is important, but it’s also critical to have effective reactive procedures ready to respond to intrusions, especially to quickly restore the integrity of operations, applications, data, or any combination of the three. Key ICS and SCADA functions should be backed up with hot standbys featuring immediate failover capabilities should their primary counterparts be disrupted. For data protection, automated and contemporaneous backups are preferable; at minimum, they should be done at a weekly interval. Ideally, the backup storage will be off-network and, even better, offsite too. The former protects backup data in case malware, such as ransomware, succeeds in circumventing defense-in-depth and network segmentation measures and locks it up. ... Like plant health, safety and environment (HSE) programs, cybersecurity should be treated as a required mainstay risk-reduction program with support from executive management, owners, and the board of directors.


The Future of the Web: The good, the bad and the very weird

The rise of big technology companies over the last two decades has made the internet more usable for most people, but has also led to the creation of a series of 'walled gardens' controlled by them, within which information is held and not easily relocated. As a result, a small number of very large companies control what you search for online, where you share information with your friends, and even where you do your shopping. Even worse, these companies have done much to develop what is effectively 'surveillance capitalism' -- taking the information we have shared with them (about what we do, where we go and who we know) to sell to advertisers and others. As smartphones have become one of the key ways we access the web, that surveillance capitalism now follows us wherever we go. And while the rise of social media (the so-called 'Web 2.0' era) promised to make it possible for individuals to produce and share their own content, it was still mostly the big tech companies that remained the gatekeepers. A platform that was once about openness now seems to be dominated by big tech.


Authorization Challenges in a Multitenant System

Restricting users to the data that belongs to their tenant is the most fundamental requirement of multitenant authorization. Tenant isolation barriers are needed to prevent users from accessing sensitive information owned by another account. Such a breach would erode trust in your service and, depending on the type of exposure that occurred, could leave you liable to regulatory penalties. Tenant identification usually occurs early in the lifecycle of a request. Your service should authenticate the user, determine the tenant they belong to, and then limit subsequent interactions to data that’s associated with that tenant. ... Another complication occurs when tenants require unique combinations of roles and actions to mirror their organization’s structures. One org might be satisfied by admin and read-only roles; another may need the admin role to be split into five distinct assignments. The most effective multitenant authorization systems will flexibly accommodate customizations on a per-tenant basis. At the application level, granular permission checks will remain the same; however, the system will need to be configurable so tenants can create their own roles by combining different permissions.
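A minimal sketch of both ideas follows: tenant isolation checked first, then per-tenant role bundles layered over the same granular permission check. The data shapes and names are assumptions for illustration, not a real authorization framework's API:

```typescript
// Sketch of per-tenant isolation plus per-tenant custom roles.
interface User { id: string; tenantId: string; roles: string[] }

// Each tenant can define its own roles as bundles of granular permissions.
type RoleDefinitions = Record<string, Set<string>>;

const tenantRoles: Record<string, RoleDefinitions> = {
  acme: { admin: new Set(["read", "write", "delete"]), "read-only": new Set(["read"]) },
  globex: { auditor: new Set(["read"]), editor: new Set(["read", "write"]) },
};

// The granular check is identical everywhere; only the role bundles vary.
function can(user: User, tenantId: string, permission: string): boolean {
  if (user.tenantId !== tenantId) return false; // tenant isolation barrier first
  const roles = tenantRoles[tenantId] ?? {};
  return user.roles.some((r) => roles[r]?.has(permission) ?? false);
}
```

Because the isolation check runs before any role lookup, a cross-tenant request fails even if the caller holds a powerful role in their own tenant.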


Deployment Patterns in Microservices Architecture

The Multiple Service Instances per Host pattern involves provisioning one or more physical or virtual hosts, each of which executes multiple services. This pattern has two variants: in one, each service instance runs as a separate process; in the other, several service instances run within the same process. One of the most beneficial features of this pattern is its efficiency in terms of resources, as well as its seamless deployment. It also has low overhead, making it possible to start a service quickly. The major drawback of this pattern is the lack of isolation between service instances unless each runs as a separate process, and the resource consumption of each instance becomes difficult to determine and monitor when several instances are deployed in the same process. The Service Instance per Host pattern is a deployment strategy in which only one microservice instance executes on a particular host at a time. Note that the host can be a virtual machine or a container running just one service instance at a time.


Bursting the Microservices Architectures Bubble

The buzz surrounding microservices in recent years doesn't reflect the sudden emergence of the microservices concept at that time, however. Microservices architectures actually have a long history that stretches back decades. But they didn't really catch on and gain mainstream focus until the early-to-mid 2010s. So, why did everyone go gaga over microservices starting about ten years ago? That's a complex question, but the answer probably involves the popularization around the same time of two other key trends: DevOps and cloud computing. You don't need microservices to do DevOps or use the cloud, but microservices can come in handy in both of these contexts. For DevOps, microservices make it easier in certain important respects to achieve continuous delivery because they allow you to break complex codebases and applications into smaller units that are easier to manage and easier to deploy. And in the cloud, microservices can help to consume cloud resources more efficiently, as well as to improve the reliability of cloud apps.


New Survey Shows 6 Ways to Secure OT Systems

A fundamental principle of OT security is the need to create an air gap between ICS and OT systems and IT systems. This basic network cybersecurity design employs an industrial demilitarized zone to prevent threat actors from moving laterally across systems, but the survey finds that only about half of organizations have an IDMZ within their OT architecture, and 8% are working on it. The healthcare, public health and emergency services sectors were especially behind. Nearly 40% of respondents in those sectors don't have plans to implement an IDMZ. Implementing a DMZ is a basic best practice, Ford says. "The risk is lateral movement where breach can move from IT to OT or vice versa, or from low-value network assets to high-value network assets," Ford says. "The more attackers can penetrate your infrastructure, the greater damage and downtime they can cause. Segmentation in DMZ, demilitarized zones, provide an air gap between IT and OT, and additional segmentation can further protect business-critical assets with strong access controls, firewalls and policy rules based on zero trust." 


Wearable devices: invasion of privacy or health necessity?

Dangling a carrot of free technology is a way to engage customers, but protection is vital should wearable technology be compromised. This data isn’t simply name, address and payment details, but potentially highly personal data about an individual’s wellbeing. The insurance industry will need to develop solutions that help protect the policyholder and reassure the individual that their data is secure. With GDPR, UK-GDPR and other global regulations to consider, insurers are spending considerable time and investment in ensuring data is well protected. The ubiquitous nature of wearables has helped increase engagement with insurance, and customers have been introduced to the numerous health benefits of using these devices. If you’ve already got a device tracking your wellbeing, why would you not want a doctor doing the same? By becoming an extension to the wearable itself, wearable insurance is likely to be generally accepted by customers.



Quote for the day:

"Different times need different types of leadership." -- Park Geun-hye

Daily Tech Digest - March 10, 2022

Sharp rise in SMB cyberattacks by Russia and China

Over the last several weeks, there has been a sharp rise in activity from countries with consistently high levels of both attempted and successful attacks originating within their borders: Russia and China. The vast volumes of data analyzed suggest these countries may even be coordinating attack efforts. Per the available analysis, attack trend lines comparing Russia and China show almost exactly the same pattern. A chart from Germany, by contrast, shows nothing close to the same pattern, leading to educated speculation that these countries could be coordinating efforts. According to the Brookings Institute, “The U.S. National Security Strategy declares Russia and China the two top threats to U.S. national security. At the best of times, U.S.-Russia ties are a mixture of cooperation and competition, but today they are largely adversarial… Russia’s increasingly close relationship with China represents an ongoing challenge for the United States. While there is little that Washington can do to draw Moscow away from Beijing, it should not pursue policies that drive the two countries closer together, such as the trade war with China and rafts of sanctions against Russia.”


Threat intelligence: why it matters, and what best practice looks like

While no two organisations are the same, one useful way to think about deploying threat intelligence is to focus on three stages: monitoring, integration and analysis. In the early days of a threat intelligence strategy, it’s unlikely that you’ll have the relevant expertise, time or resources necessary to support proactive intelligence analysis yet. However, by collecting information from various sources and monitoring them for threat indicators relevant to your business, it’s possible to drive significant value. This could include things like leaked corporate credentials, mentions of your product on the dark web, or typosquats of your corporate brands in domain name registrations, all of which are important as you begin your journey. The intelligence gained from doing so could help inform the IT department of needed password resets, flag phishing email campaigns targeting employees and accelerate efforts to verify potential security incidents. Next comes integration. 
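Monitoring for typosquats of corporate brands can start from simple permutations of the domain name, compared against new registrations. This sketch covers only a few permutation classes; real monitoring services enumerate many more:

```typescript
// Sketch: enumerate simple typosquat candidates for a brand name so a
// monitoring job can compare them against new domain registrations.
function typosquats(name: string, tld = "com"): string[] {
  const variants = new Set<string>();
  for (let i = 0; i < name.length; i++) {
    // character omission: "exmple"
    variants.add(name.slice(0, i) + name.slice(i + 1));
    // adjacent-character swap: "examlpe"
    if (i < name.length - 1) {
      variants.add(name.slice(0, i) + name[i + 1] + name[i] + name.slice(i + 2));
    }
    // character doubling: "exaample"
    variants.add(name.slice(0, i + 1) + name[i] + name.slice(i + 1));
  }
  variants.delete(name); // swapping identical letters reproduces the original
  return [...variants].map((v) => `${v}.${tld}`);
}
```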


A Proposal For Type Syntax in JavaScript

When we’ve been asked "when are types coming to JavaScript?", we’ve had to hesitate to answer. Historically, the problem was that if you asked developers what they had in mind for types in JavaScript, you’d get many different answers. Some felt that types should be totally ignored, while others felt like they should have some meaning – possibly that they should enforce some sort of runtime validation, or that they should be introspectable, or that they should act as hints to the engine for optimization, and more! But in the last few years we’ve seen people converge more towards a design that works well with the direction TypeScript has moved towards – that types are totally ignored and erasable syntax at runtime. This convergence, alongside the broad use of TypeScript, made us feel more confident when several JavaScript and TypeScript developers outside of our core team approached us once more about a proposal called "types as comments". The idea of this proposal is that JavaScript could carve out a set of syntax for types that engines would entirely ignore, but which tools like TypeScript, Flow, and others could use.
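Today's TypeScript already embodies the erasable-syntax idea the proposal would bring to engines: the annotations below compile away entirely and carry no runtime behavior, which is exactly what "types as comments" asks engines to do with such tokens directly.

```typescript
// Type annotations are erased at build time; the emitted JavaScript is the
// same function with the ": string" and ": number" tokens removed. Nothing
// at runtime inspects or enforces them.
function greet(name: string, times: number): string {
  return `hello ${name}! `.repeat(times).trim();
}

const result: string = greet("ts", 2);
```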


Have smart wearables increased productivity of employees in the hybrid working environment?

Smartwatches offer myriad features that help individuals take charge of their daily tasks and complete them quicker and with ease. From using voice commands to dictate emails, to sending short messages, to tracking physical movement, water intake, SpO2, heart rate, stress, breathing exercises, stretching and more, these devices have enabled us to tirelessly complete tasks without compromising on fitness and health. SpO2 has emerged as an important measure of fitness over the last two years, and it is reassuring to keep a check on it from time to time in case any medical assistance is required. Earbuds, meanwhile, let you answer calls hands-free, which makes it easier to take notes or carry on with other tasks, thereby boosting productivity. Features like ANC and ENC take care of background noise to further enhance the quality of the audio experience. And in case you’re out running an errand during office hours and forget a crucial meeting that was scheduled, your smartwatch will notify you, and you can pick up the call via your earbuds while you drive back home.


Best Practices for Running Stateful Applications on Kubernetes

A common approach is to run your stateful application in a VM or bare metal machine, and have resources in your Kubernetes cluster communicate with it. The stateful application becomes an external integration from the perspective of pods in your cluster. The upside of this approach is that it allows you to run existing stateful applications as is, with no refactoring or re-architecture. If the application is able to scale up to meet the workloads required by the Kubernetes cluster, you do not need Kubernetes’ fancy auto scaling and provisioning mechanisms. The downside is that by maintaining a non-Kubernetes resource outside your cluster, you need to have a way of monitoring processes, performing configuration management, performing load balancing and service discovery for that application. ... A second, equally common approach is to run stateful applications as a managed cloud service. For example, if you need to run a SQL database with a containerized application, and you are running in AWS, you can use Amazon’s Relational Database Service (RDS). 


3 DevSecOps Practices to Minimize Impact of the Next Log4Shell

Security is tough to get right, and it’s made more difficult by market pressures, cloud complexity and the growing prevalence of open source libraries. This has expanded the typical enterprise’s cyberattack surface to many times its size of several years ago. It has also provided more opportunities for potentially critical vulnerabilities to enter the development cycle and then persist into production. Log4Shell is the poster child for that problem. As a result, it’s more important than ever that we pay more than lip service to the concept of security as a shared responsibility within the organization. “Shared responsibility” is often used to mean greater boardroom buy-in, or in the context of behavioral change among staff, but it’s just as important in IT departments. We need developers to become more skilled in building secure products, but we also need to ensure apps in production continue running securely. Breaking down the silos between developers, operations and security teams will drive true DevSecOps practices. To get there, organizations should unify teams around a centralized platform that gives them visibility and control.


Forrester predicts RPA software market growth will begin to flatten next year

Forrester is predicting that some of the money going to RPA software today will begin to shift to broader AI automation solutions. It’s worth noting that while RPA has robotic in its name, it’s not really AI in a true sense. The bots in this case are more like scripts completing a set of highly manual tasks. By comparison, no-code automation solutions make it easy to create a workflow, presumably without consulting help. AI provides a way to intelligently implement tasks and take steps based on the data, instead of moving through a set of highly defined, hard-coded work. This decline is coming in spite of enthusiasm from investors, who valued UiPath at $35 billion when it raised $750 million last year, its last private fundraise prior to its IPO. Today the company’s market cap sits at close to $15 billion, certainly a precipitous drop in value, even taking into consideration the big hit software companies have been taking in the stock market over the last year. Meanwhile, we also saw some pretty significant consolidation as companies like SAP bought Signavio, ServiceNow acquired Intellibot and Salesforce snagged Servicetrace, as several examples.


The rise of confidential blockchains

Cryptoeconomics has long been founded upon the proof-of-work consensus algorithm. This algorithm has proven to be truly resilient to Byzantine attacks. But there are downsides. First, the performance of proof-of-work blockchains remains poor. Bitcoin, for example, still operates at seven transactions per second. Second, proof-of-work blockchains are also extremely energy-intensive. Today, the process of creating Bitcoin consumes around 91 terawatt-hours of electricity annually. This is more energy than is used by Finland, a nation of about 5.5 million people. One section of commentators considers this a necessary cost of protecting the global cryptocurrency system, rather than just the cost of running a digital payment system. Another section thinks this cost could be done away with by developing proof-of-stake consensus protocols, as they deliver much higher transaction throughput. Indeed, the proof-of-stake blockchains built on the Tendermint framework deliver upwards of 10,000 transactions per second. However, proof-of-stake blockchains also have some downsides.


Teaming is hard because you’re probably not really on a team

Real teams are all about solving the hardest, most complex problems. A diverse set of perspectives and skills is required to untangle these sorts of problems, for which there is no obvious solution. Members of a real team trust each other and work toward a common goal. Real teams are thoughtful, they argue, and they push each other to do better. They require nimble leaders who prioritize building connections within the team. They create clear boundaries that reinforce a strong sense of trust. They have a shared purpose and clear norms. And, importantly, they produce a collective output. If you see a group of people focusing intently on solving a single, very complex problem, you’re probably looking at a real team. Working groups are all about efficiency. Most people spend most of their productive time in working groups. We’ll say it again: there is nothing wrong with being in a working group. In fact, working groups are often best suited to the tasks at hand. Managers of working groups focus heavily on techniques to make their collaboration more efficient. 


How machine learning can course-correct inherent biases in recruiting

Often, if the job opening is attractive, there may be hundreds of people applying for a single position. Toward the end of the hiring process, all of the candidates are more than good enough to do the job, but they don’t make the final cut. How hiring managers decide between them often comes down to minute mistakes. These candidates are an underutilised resource for HR teams when recruiting: they have already proven themselves, but historically there hasn’t been an easy way to match them with other companies that would likely hire them based on their performance. Joonko has developed a platform that is made up entirely of silver medalists, pre-qualified candidates who have passed at least two stages of the recruiting process, and matches these candidates with future jobs, thus saving significant time in the recruiting process. ... “Silver medalists were already vetted by their peers, and the conversation with the candidates could be more around the specific needs of the organisation, without the excruciating part of the interview process.”



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - August 21, 2021

Can AGI take the next step toward genuine intelligence?

To take the next step on the road to genuine intelligence, AGI needs to create its underpinnings by emulating the capabilities of a three-year-old. Take a look at how a three-year-old playing with blocks learns. Using multiple senses and interaction with objects over time, the child learns that blocks are solid and can’t move through each other, that if the blocks are stacked too high they will fall over, that round blocks roll and square blocks don’t, and so on. A three-year-old, of course, has an advantage over AI in that he or she learns everything in the context of everything else. Today’s AI has no context. Images of blocks are just different arrangements of pixels. Neither image-based AI (think facial recognition) nor word-based AI (like Alexa) has the context of a “thing” like the child’s block which exists in reality, is more-or-less permanent, and is susceptible to basic laws of physics. This kind of low-level logic and common sense in the human brain is not completely understood but human intelligence develops within the context of human goals, emotions, and instincts. Humanlike goals and instincts would not form the best basis for AGI.


How to take advantage of Android 12’s new privacy options

First and foremost in the Android 12 privacy lineup is Google’s shiny new Privacy Dashboard. It’s essentially a streamlined command center that lets you see how different apps are accessing data on your device so you can clamp down on that access as needed. ... Next on the Android 12 privacy list is a feature you’ll occasionally see on your screen but whose message might not always be obvious. Whenever an app is accessing your phone’s camera or microphone — even if only in the background — Android 12 will place an indicator in the upper-right corner of your screen to alert you. When the indicator first appears, it shows an icon that corresponds with the exact manner of access. But that icon remains visible only for a second or so, after which point the indicator changes to a tiny green dot. So how can you know what’s being accessed and which app is responsible? The secret is in the swipe down: Anytime you see a green dot in the corner of your screen, swipe down once from the top of the display. The dot will expand back to that full icon, and you can then tap it to see exactly what’s involved.


Achieving Harmonious Orchestration with Microservices

The interdependency of your microservices-based architecture also complicates logging and makes log aggregation a vital part of a successful approach. Sarah Wells, the technical director at the Financial Times, has overseen her team’s migration of more than 150 microservices to Kubernetes. Ahead of this project, while creating an effective log aggregation system, Wells cited the need to selectively choose metrics and named attributes that identify an event, along with all the surrounding occurrences that happen as part of it. Correlating related services ensures that a system is designed to flag genuinely meaningful issues as they happen. In her recent talk at QCon, she also noted the importance of understanding rate limits when constructing your log aggregation. As she pointed out, when it comes to logs, you often don’t know you’ve lost a record of something important until it’s too late. A great approach is to implement a process that turns any situation into a request. For instance, the next time your team finds itself looking for a piece of information it deems useful, don’t just fulfill the request; log it for your next team process review to see whether you can expand your reporting metrics.
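The "selectively chosen metrics and named attributes" idea can be sketched as structured events carrying a correlation ID, so the aggregation layer can join logs from every service touched by one request. Field names here are assumptions, not a specific logging library's schema:

```typescript
// Emit structured log events that carry a correlation ID shared across
// every service touched by a single request.
interface LogEvent {
  [attr: string]: string;
  timestamp: string;
  service: string;
  correlationId: string;
  event: string;
}

function makeLogger(service: string, correlationId: string) {
  const events: LogEvent[] = [];
  return {
    // Named attributes travel with the event so aggregation can filter on them.
    log(event: string, attrs: Record<string, string> = {}): void {
      events.push({ ...attrs, timestamp: new Date().toISOString(), service, correlationId, event });
    },
    flush: () => events,
  };
}
```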


How Ready Are You for a Ransomware Attack?

Setting the bar high enough to protect against initial entry is a laudable goal, but also adheres to the law of diminishing returns. This means the focus must shift toward making it more difficult for an attacker to move around your environment once they have gotten inside. This phase of the attack often requires some manual control, so identifying and disrupting command and control (C2) channels can pay significant dividends, but realize that only the least sophisticated attacker will reuse the same domains and IPs of a previous attack. So rather than looking for C2 communications via threat intel feeds, you need to look for patterns of behavior that resemble remote-access trojans (RATs) or hidden tunnels (suspicious forms of beaconing). Barriers to privilege escalation and lateral movement come down to cyber hygiene related to patching (are there easily accessible exploits for local privilege escalation?), rights management (are accounts granted overly generous privileges?) and network segmentation (is it easy to traverse the network?). Most of the current raft of ransomware attacks have utilized the serial compromise of credentials to move from the initial point of entry to more useful parts of the network.
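One behavioral pattern worth hunting is beacon-like regularity in outbound connections. A toy sketch follows, with an illustrative jitter threshold rather than a production-tuned one; real detectors also model the jitter attackers add deliberately:

```typescript
// Sketch: flag beacon-like traffic by checking how regular the intervals
// between connections to one destination are.
function looksLikeBeacon(timestampsMs: number[], maxJitterRatio = 0.1): boolean {
  if (timestampsMs.length < 4) return false; // too few samples to judge
  const intervals = timestampsMs.slice(1).map((t, i) => t - timestampsMs[i]);
  const mean = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  const variance = intervals.reduce((a, b) => a + (b - mean) ** 2, 0) / intervals.length;
  const stddev = Math.sqrt(variance);
  // Highly regular check-ins (low deviation relative to the mean) are suspicious.
  return mean > 0 && stddev / mean < maxJitterRatio;
}
```

Unlike a threat intel feed lookup, this kind of check keeps working when the attacker rotates to fresh domains and IPs, because it keys on behavior rather than indicators.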


The rise and fall of merit

Wooldridge identifies Plato’s Republic as the origin of the concept of meritocracy, in which the Athenian philosopher imagined a society run by an intellectual elite, “who have the ability to think more deeply, see more clearly and rule more justly than anyone else.” Crucially, Plato’s ruling class was remade each generation—aristocrats were not assumed to pass on their talents—and it prized women as highly as men. Wooldridge finds meritocratic leanings in other pre-modern societies, including China, which began in the fifth century to use exams to recruit civil servants. But it was the expansion of the state in Europe in the early modern period that saw meritocracy first take root, albeit in a paradoxical way. As states expanded, demand for capable bureaucrats outgrew the ability of the aristocracy to produce them. The solution was to look downward and offer patronage to talented lowborns. Men such as French dramatist Jean Racine; London diarist Samuel Pepys; economist Adam Smith; and Henry VIII’s right-hand man, Thomas Cromwell, were all plucked from obscurity by favoritism. 


Intel Advances Architecture for Data Center, HPC-AI and Client Computing

This x86 core is not only the highest-performing CPU core Intel has ever built, but it also delivers a step function in CPU architecture performance that will drive the next decade of compute. It was designed as a wider, deeper and smarter architecture to expose more parallelism, increase execution parallelism, reduce latency and increase general-purpose performance. It also helps support large-data and large-code-footprint applications. Performance-core provides a geomean improvement of about 19% across a wide range of workloads over our current 11th Gen Intel® Core™ architecture (Cypress Cove core) at the same frequency. Targeted for data center processors and for the evolving trends in machine learning, Performance-core brings dedicated hardware, including Intel's new Advanced Matrix Extensions (AMX), to perform matrix multiplication operations for an order-of-magnitude performance improvement – a nearly 8x increase in artificial intelligence acceleration. This is architected for software ease of use, leveraging the x86 programming model.


A Soft, Wearable Brain–Machine Interface

Being both flexible and soft, the EEG scalp device can be worn over hair and requires no gels or pastes to keep in place. The improved signal recording is largely down to the micro-needle electrodes, invisible to the naked eye, which penetrate the outermost layer of the skin. "You won't feel anything because [they are] too small to be detected by nerves," says Woon-Hong Yeo of the Georgia Institute of Technology. In conventional EEG set-ups, he adds, any motion like blinking or teeth grinding by the wearer causes signal degradation. "But once you make it ultra-light, thin, like our device, then you can minimize all of those motion issues." The team used machine learning to analyze and classify the neural signals received by the system and identify when the wearer was imagining motor activity. That, says Yeo, is the essential component of a BMI: to distinguish between different types of inputs. "Typically, people use machine learning or deep learning… We used convolutional neural networks." This type of deep learning is typically used in computer vision tasks such as pattern recognition or facial recognition, and "not exclusively for brain signals," Yeo adds.
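The first layer of such a convolutional network slides small learned filters along the EEG time series to pick out local waveform shapes. A toy NumPy sketch of that one operation (the kernels, stride, and single-channel setup here are illustrative assumptions; the team's actual classifier stacks many such layers and ends in a classification head over imagined-movement classes):

```python
import numpy as np

def conv1d_features(signal, kernels, stride=1):
    """Toy version of a CNN's first layer on one EEG channel: slide
    each kernel across the signal, take the dot product at every
    position, and apply a ReLU nonlinearity. Returns one feature row
    per kernel."""
    features = []
    for k in kernels:
        klen = len(k)
        row = [max(0.0, float(np.dot(signal[i:i + klen], k)))
               for i in range(0, len(signal) - klen + 1, stride)]
        features.append(row)
    return np.array(features)
```

In a trained network the kernel weights are learned, so each filter comes to respond to a signal motif (such as a rhythm change during imagined movement) rather than being hand-designed.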


How to proactively defend against Mozi IoT botnet

While the botnet itself is not new, Microsoft’s IoT security researchers recently discovered that Mozi has evolved to achieve persistence on network gateways manufactured by Netgear, Huawei, and ZTE. It does this using clever persistence techniques that are specifically adapted to each gateway’s particular architecture. Network gateways are a particularly juicy target for adversaries because they are ideal as initial access points to corporate networks. Adversaries can search the internet for vulnerable devices via scanning tools like Shodan, infect them, perform reconnaissance, and then move laterally to compromise higher value targets—including information systems and critical industrial control system (ICS) devices in the operational technology (OT) networks. By infecting routers, they can perform man-in-the-middle (MITM) attacks—via HTTP hijacking and DNS spoofing—to compromise endpoints and deploy ransomware or cause safety incidents in OT facilities. In the diagram below we show just one example of how the vulnerabilities and newly discovered persistence techniques could be used together.


CBAP certification: A high-profile credential for business analysts

CBAP is the most advanced of IIBA’s core sequence of credentials for business analysts. It follows the Entry Certificate in Business Analysis (ECBA) and the Certification for Competency in Business Analysis (CCBA). As you might expect, the requirements get more extensive as you climb the ladder: CBAP requires more training, work experience, and knowledge area expertise. AdaptiveUS, a company that offers training for all of IIBA’s certs, breaks down the various requirements, but the important thing to know is that CBAP holders are at the top of the heap; while you don’t need to have the lower-level certs to get your CBAP certification, you should be fairly well established in your career as a BA before you consider it. Like IIBA’s other certs, the CBAP draws from A Guide to the Business Analysis Body of Knowledge, also known as the BABOK Guide. The BABOK Guide is a publication from IIBA that aims to serve as a bible for the business analysis industry, collecting best practices from real-world practitioners. It was first published in 2005 and is continuously updated. 


A Short Introduction to Apache Iceberg

Partitioning reduces query response time in Apache Hive because data is stored in horizontal slices. In Hive, partitions are explicit, appear as a column, and must be given partition values. This approach gives Hive several issues: it cannot validate partition values and is fully dependent on the writer to produce correct ones; it depends entirely on the user to write queries correctly; and working queries are tightly coupled to the table's partitioning scheme, so the partitioning configuration cannot be changed without breaking queries. Apache Iceberg introduces the concept of hidden partitioning, where the reading of unnecessary partitions is avoided automatically. Data consumers that fire the queries don't need to know how the table is partitioned or add extra filters to their queries, and Iceberg partition layouts can evolve as needed. Iceberg can hide partitioning because it does not require user-maintained partition columns: it produces partition values by taking a column value and optionally transforming it.
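Iceberg's `day` transform, for example, derives the partition value (days since the Unix epoch) from the timestamp column itself, so writers never supply a partition column and readers never filter on one. A minimal pure-Python imitation of that transform (a sketch of the idea, not Iceberg's actual implementation):

```python
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def day_transform(ts):
    """Mimic Iceberg's `day` partition transform: the partition value
    is computed from the data column (days since epoch), not supplied
    by the writer. Because the engine derives it, the layout can later
    evolve (say, to hourly partitions) without breaking existing
    queries."""
    return (ts - EPOCH).days
```

Every row written on the same calendar day lands in the same partition automatically, and a predicate like `WHERE ts >= '2024-05-10'` can be pruned to the matching day partitions without the user ever mentioning a partition column.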



Quote for the day:

"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.