
Arduino Portenta Technical Specification & Price

In this article, we will take a look at the technical specification of the Arduino Portenta. Along with this, we will also learn about its price and release date.

Just today, Arduino launched its new product, the "Arduino Portenta", at the CES 2020 show in Las Vegas. From what I have learnt so far, it is an IoT device, which means you can use it to connect things in your house to the internet!

But not just that! The company is also claiming that we can use Arduino Portenta even in industrial applications.

Ok, all this is fine. But why do we even need this device in the first place? To answer this question, we first need to discuss the technical details of Arduino Portenta. So let us first do that!

Arduino Portenta H7 Technical Specification

Arduino Portenta H7 Processor

The Arduino Portenta H7 is driven by STMicroelectronics' STM32H747XI low-power processor. This processor features dual ARM Cortex cores.

The first ARM core is a Cortex-M7 running at 480 MHz, while the second is a Cortex-M4 running at 240 MHz. Together, these two cores allow the processor to run Arduino code as well as Python and Javascript code!

Now this is very interesting! Since it can run Javascript, many web developers will be able to work with it!

What OS does the Portenta H7 run?

We got to know that the Portenta H7 runs on Arm's Mbed OS! This is amazing! Running an embedded operating system means the board's resources can be used efficiently!

What type of connectivity does the Portenta H7 have?

Arduino mentioned that the board supports all the standard connectivity options we would expect. That means it has support for Bluetooth Low Energy, WiFi and LTE as well!

UPDATE On The Radio Module Of Arduino Portenta

We just got to know that the Arduino Portenta features a Murata 1DX dual WiFi 802.11 chipset. This chipset also has support for Bluetooth 5.1 BR/EDR/LE!

Arduino Portenta H7 IoT Module

What GPU Type Can We Find In Arduino Portenta H7?

The technical specification of the Arduino Portenta H7 mentions that it features a Chrom-ART graphical hardware accelerator.

What does the technical specification say about Timers in Arduino Portenta H7?

Alright guys. We know that to work with any time-sensitive operation, we need timer support. So how do we score here? Well, luckily, on the timer front the board has a total of 22 general-purpose timers and watchdogs. So we have plenty of room to take advantage of them!

But what about the UART ports in Portenta H7?

Of course, even though we have moved towards wireless connectivity, we still need the good old UART ports for many reasons. So how do we fare on this front? Well, the Arduino Portenta H7 delivers once again! It exposes a total of 4 UART ports, and two of these ports have support for flow control.

How many connectors are exposed on the Arduino Portenta H7 board?

The Arduino Portenta H7 board exposes a total of 160 connector pins. These are grouped into two 80-pin connectors and expose all the peripherals present on the Portenta H7 board.

What type of USB does Arduino Portenta H7 support?

On the USB front, the Arduino Portenta H7 exposes a USB Type-C connector. This USB-C connector supports host/device operation and DisplayPort output, can operate in either High-Speed or Full-Speed USB configuration, and also supports Power Delivery.

What is the operating temperature range of Arduino Portenta H7?

The Arduino Portenta H7 can operate at temperatures ranging from -40 °C to +85 °C when running without the wireless module. With the wireless module, the Portenta H7 can operate in the range of -10 °C to +55 °C.

What is the operating voltage of Arduino Portenta H7 according to its technical specification?

Arduino Portenta H7 works at 3.3 Volts.

What type of battery does Portenta H7 support?

The Arduino Portenta H7 can be powered by a single-cell Li-Po battery with a nominal voltage of 3.7 Volts and a minimum capacity of 700 mAh.

Does Arduino Portenta H7 support an SD Card?

Yes, it does! The Portenta board has SD card interface support. However, this SD card interface is available only through an expansion port. So that is a bit of a bummer! 🙁

But now that we know the Arduino Portenta H7 technical specification, when will it be released?

I know, I know. No matter how good the device is, we cannot take advantage of it until it gets into our hands, right? So I can understand why you are eager to know when this module is going to be released.

So from what we got to know, the Arduino Portenta H7 is already available to beta testers, and it is going to become available for everyone by February 2020! Guys, that means we are just a month away from getting our hands on it!

Now that we have gone through its technical specification, what will be the price of the Arduino Portenta H7?

Cool! So now that we know we can get hold of Portenta by next month, our next question is obviously this.

How much is it going to cost?

Unfortunately, at this point in time, I could not find an answer to this (look for the update at the end of this article for pricing information). So I will continue to look out for this information. Once I find it, I will revisit this article and update it with the latest price. But until then, I can only leave you guessing about it.

But on the other hand, if you have any idea about it, let me know in the comments below. And not just that, if you have any other information about the Portenta H7 in general that I have missed here, do let me know. That way, I can update this article in the future for others to benefit from it.

So there you have it. I have shared all the information I had about the Arduino Portenta H7 here. While this device is something I am eagerly looking forward to, I wish it had a better name. Somehow, for me, the name Portenta H7 is difficult to remember. But maybe it is just me, I guess.

In any case, I will end this article at this point. See you guys again in the next article. Until then, take care! 🙂

Latest Update On Arduino Portenta Price

We just got to know that Arduino Portenta will cost USD 99.90 + Tax.

So the cost of the Arduino Portenta in the US will be around $100 + taxes.

The cost of the Arduino Portenta in the UK will be around GBP 77 + taxes.

The cost of the Arduino Portenta in European countries will be around EUR 90 + taxes.

And finally, the cost of the Arduino Portenta in India will be around Rs. 7200 + taxes.


This one value in Javascript is not equal to itself!

We know that Javascript supports all kinds of values such as strings, numbers, booleans and so on. All these values are deterministic, in the sense that the same value always stays the same. For example, a number value of 25 is always equal to 25 in Javascript, no matter what. Similarly, a string value of “Hello” is always equal to another string “Hello”.

In other words, these values in Javascript can always be compared with another value to determine whether they are the same or different. To understand this better with an example, let us open up our browser console. In my case I am using the Google Chrome browser console, where we will create 3 variables and compare them.
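
Here is a short console session along these lines (the value chosen for b is arbitrary; it just needs to differ from 2):

var a = 2;
var b = 3;   // any value other than 2 works here
var c = 2;

a === c;     // returns true  – both hold the value 2
a === b;     // returns false – the values differ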

We can note from the Javascript demo above that when the variables a and c are compared, since both hold the same value of 2, the comparison returns true. On the other hand, when variable a is compared with b, since their values are different, the comparison returns false.

This is true for all types of values present in Javascript – be it strings, numbers, booleans, anything you can think of.

However, there exists one special value in Javascript that is never equal to another variable holding the same value. In other words, it is not even equal to itself. This value is NaN!

NaN stands for “Not a Number”, and it is the one special value in Javascript that does not return true when compared with itself.

Why does NaN not equal itself in Javascript?

Now you might be wondering why a NaN value does not equal itself. The answer lies in the way the Javascript language has been designed.

NaN, or Not a Number, is a special value in Javascript that is used to represent a nonsensical value – that is, it is the value returned whenever a nonsensical operation is performed. Now this is where it gets interesting. Why would a nonsensical value not be the same everywhere, all the time? In other words, why is this happening here:
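
That is, typing the comparison directly into a browser console gives:

NaN === NaN;   // evaluates to false
NaN == NaN;    // also false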

Why is it returning false?

The answer is that NaN, as mentioned earlier, is a value used to represent nonsensical results. So if the result of an operation is something that cannot be represented by ordinary, normal values, Javascript returns the value NaN.

Now, if two operations result in nonsensical values, those results are not necessarily equal – each operation may have produced a different kind of nonsensical result, yet both have to be represented by the single value NaN. Hence Javascript (following the IEEE 754 floating-point standard that its numbers are based on) treats two NaNs as potentially different values that are never equal to each other.
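
As a side note, since NaN never compares equal to anything, the equality operator cannot be used to detect it. Javascript instead provides Number.isNaN( ) and the older global isNaN( ) for that purpose:

var result = 0 / 0;       // a nonsensical operation, so it produces NaN
result === NaN;           // false – equality can never detect NaN
Number.isNaN(result);     // true  – the reliable check (added in ES2015)
isNaN(result);            // true  – older global check; it coerces its argument to a number first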

I learnt about this and many other similar anomalies in the book Eloquent Javascript. It is a very good book for learning and understanding such interesting things about Javascript, so I will definitely recommend it to anyone interested in learning Javascript in depth.

If you are also aware of any other similar interesting things about Javascript do let me know in the comments below. Until then, happy coding! 🙂


How to scrape HTML tables using Python

Python is a versatile programming language that can be used to write programs for a wide variety of applications. The number of available libraries makes Python one of the most useful programming languages for performing numerous tasks. Be it a simple Python script to automate basic shell command operations in an operating system, or a program to perform data analysis or machine learning, Python excels at them all, thanks to the available Python library packages.

In this article, we will explore and learn how to use the Python programming language to perform one of the most common tasks in the world of the web: HTML scraping, also known as web scraping.


All the websites we view in our favorite web browser are written using mainly 3 important front-end languages – HTML, CSS and Javascript. Each of these 3 languages has a specific role to play in the creation of a web page. They are:

HTML – HTML is a simple markup language used to create the various HTML elements that make up a web page. The headings, paragraphs, lists, images, tables, headers and footers, links, etc. that we see in a web page are all different HTML elements. In other words, the HTML markup language is used to create the HTML elements that we see as part of a web page. HTML stands for Hyper Text Markup Language.

CSS – CSS is a style sheet language that is mainly responsible for implementing the look and feel of the HTML elements mentioned above. You might have seen the same contents of a table displayed in two different styles on two different websites. This is because, even though both use the same HTML table element for this content, each website styles the table differently. This is achieved using CSS, which stands for Cascading Style Sheets.

Javascript – Javascript is another programming language that was originally developed for use in web browsers, but nowadays it has made its way into all parts of web development – be it the front end (browser side) or the back end (server side). On the front end, Javascript is used to provide interactive functionality to the HTML elements of a web page. For example, many web pages these days use infinite scrolling, where only the first few content elements are loaded and the rest are loaded dynamically as we scroll towards the bottom of the page; the Twitter home page is a good example of this. Almost all the interactivity of a web page is achieved with the help of Javascript these days.

When a web page is rendered in a browser on the user’s computer, it includes all these HTML elements with the page’s text and image content embedded within them. So we can actually retrieve this text and image content from a web page using a programming language such as Python. This process is called “web scraping” in the web development world.

Scraping A Web Page Using Python

In order to learn how to scrape a web page using Python, we will try to scrape a table that lists mountains across the world ordered by their elevation, as seen on the official Wikipedia website:

https://en.wikipedia.org/wiki/List_of_mountains_by_elevation

On this Wikipedia web page, we notice the presence of several tables. The first table displays the list of mountains having an elevation of 8000 meters or above. It is this table that we would like to scrape using Python.

Introduction to BeautifulSoup library in Python

As mentioned at the beginning of this article, Python comes with a myriad of useful libraries that one can use to perform complex tasks with ease through their APIs. One such library is the “BeautifulSoup” library, one of the most interesting libraries one can use in Python to perform web scraping.

BeautifulSoup Python library’s functionalities

One of the most important functionalities of Python’s BeautifulSoup library is its ability to parse and interpret HTML tags. All HTML elements are represented using what are called HTML tags. Some examples of such tags are <h1> for a main heading, <p> for paragraphs and <table> for tables. Python’s BeautifulSoup library understands these tags and can extract the information a web page holds within them. The library exposes APIs that let us use these functionalities in our own Python programs, which we will make use of in the Python web scraper we are about to write.

The BeautifulSoup library is available in the Python package repository under the name ‘bs4’ and can be installed on your system for developing the web scraper using the command:

pip install bs4
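
As a quick check that the installation worked, we can parse a tiny hand-written HTML snippet (the string below is just a made-up example, not taken from Wikipedia):

from bs4 import BeautifulSoup

snippet = '<table><tr><td>Mount Everest</td><td>8848 m</td></tr></table>'
soup = BeautifulSoup(snippet, 'html.parser')
print(soup.td.text)   # prints: Mount Everest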

BeautifulSoup library example

In order to understand how the BeautifulSoup library works, let us download a Wikipedia web page onto our local system. For this example, let us download the following Wikipedia web page:

https://en.wikipedia.org/wiki/List_of_mountains_by_elevation

Let us save the web page from the above link as mountains.html in our local home directory (~/).

We can then read the content of this web page with Python’s BeautifulSoup library using the following commands:

from bs4 import BeautifulSoup
import os

# open() does not expand '~', so expand the home directory path explicitly
input = open(os.path.expanduser('~/mountains.html'), 'r')

soup = BeautifulSoup(input.read(), 'html.parser')

tables = soup.find_all('table')

print(tables)

Well, that’s a mouthful of code you just read there. Let us try to understand it step by step and see what we are doing here.
The first line:

from bs4 import BeautifulSoup

simply imports the BeautifulSoup class from the bs4 package we just installed (the import os next to it pulls in the standard os module that we use for path handling). The next line:

input = open(os.path.expanduser('~/mountains.html'), 'r')

simply uses Python’s file operation function open( ) to open the previously downloaded mountains.html web page; os.path.expanduser( ) expands the ‘~’ into the full path of our home directory, since open( ) does not do that on its own. In the next line:

soup = BeautifulSoup(input.read(),'html.parser') 

we call the BeautifulSoup constructor and pass it, as its first argument, the content of our mountains.html web page read with Python’s standard file operation function read( ). The other argument we pass along is ‘html.parser’. This tells BeautifulSoup to interpret the passed content as HTML data and to use Python’s built-in HTML parser to parse it. The resulting parsed HTML data is assigned to the variable ‘soup’ for later usage. In the next line we do this:

tables = soup.find_all('table')

In the above line, we search for all the HTML tables available in the ‘soup’ variable and assign the result to a new variable, tables. So by now we should have all the HTML tables present in the mountains.html file assigned to the Python list variable ‘tables’.

Finally, we print the content of this tables variable, which should show all the tables found in our mountains.html web page!

While this is good and all, we manually downloaded the Wikipedia web page, saved it as mountains.html and only then used Python’s BeautifulSoup library to process it. However, wouldn’t it be great if we could eliminate this manual step and fetch the page programmatically as well? As a next step, we will do exactly this using another Python library – urllib, introduced next.

Introduction to Python Urllib library

Another important Python library that we are going to use to create our web scraper program is called the urllib library. Let us see what functionalities Python’s urllib library brings to us.

Python’s urllib library is used to fetch the contents of a web page given its URL. It provides us with APIs such as urlopen( ) and read( ) to open a web page and read its contents back. URL stands for Uniform Resource Locator – a web address that one can use to locate a web page and read/fetch its contents.

Do we need to install the Python urllib library?

Unlike BeautifulSoup, urllib is part of Python’s standard library, so there is nothing extra to install – it ships with every Python 3 installation and can simply be imported.

Python Urllib Example

Here is a simple example of using the urllib library to fetch the content of a Wikipedia web page.

First, we will import the request module of the urllib library into our Python program using Python’s import command:

import urllib.request

The urllib library exposes several useful APIs for other programs to make use of. One such API is the request module, which one can use to open a web page and read its content. The request module provides the urlopen( ) function, which returns a response object whose read( ) method gives us back the page content. An example of a Python program using this API is given below, where we read the contents of a Wikipedia web page:

import urllib.request

content = urllib.request.urlopen('https://en.wikipedia.org/wiki/List_of_mountains_by_elevation')

read_content = content.read()

We can actually combine the above two calls – urlopen( ) and read( ) – into a single line, as shown below:

source = urllib.request.urlopen('https://en.wikipedia.org/wiki/List_of_mountains_by_elevation').read()

Python Web Scraper using Urllib and BeautifulSoup libraries

Finally, by combining the APIs provided by the BeautifulSoup and urllib libraries, we can write our web scraper program that reads a Wikipedia page’s contents, extracts its tables, and prints the content of a particular table, as shown below:

from bs4 import BeautifulSoup
import urllib.request

# fetch the Wikipedia page and parse its HTML
source = urllib.request.urlopen('https://en.wikipedia.org/wiki/List_of_mountains_by_elevation').read()
soup = BeautifulSoup(source, 'html.parser')

# collect all tables and print every row of the first one
tables = soup.find_all('table')
table_rows = tables[0].find_all('tr')
for tr in table_rows:
    print(tr)

The above program is our intended Python web scraper: it fetches a Wikipedia page using the urllib library, and then uses the Python BeautifulSoup library to extract the page’s contents and access each of its HTML elements.

Here we are simply printing every row of the first “table” element of the Wikipedia page; however, BeautifulSoup can be used to perform many more complex scraping operations than what has been shown here.
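
As one possible next step (a sketch building on the program above rather than part of it), we could extract the plain text of every cell instead of printing raw HTML rows:

# sketch: turn each row of the first table into a list of cell texts;
# assumes the 'table_rows' variable from the program above is still in scope
rows = []
for tr in table_rows:
    cells = tr.find_all(['th', 'td'])       # header and data cells
    rows.append([cell.get_text(strip=True) for cell in cells])

for row in rows[:5]:                        # show the first few rows
    print(row)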

I will explain more such operations one can perform with the BeautifulSoup Python library in future articles, but this should serve as an entry point for someone who is just getting started with the Python programming language for web scraping.