In this tutorial you will use Wia.io to collect and display data from the RPi. You will also use Wia to send commands to a device.
Create an account on Wia.io
On your Raspberry Pi, install Wia by opening a terminal window and running the following command:
pip install wia
Create a directory called iot-week9 for your python programs:
mkdir iot-week9
Go to the Wia Dashboard and select Create a New Space then select Devices. Add a device and give it the name SensePi. Now, in the Configuration tab for your device, you will find device_secret_key which should begin with d_sk. This will be important later on.
In the iot-week9 directory, create a file called sensehat_wia.py containing the following code:
from wia import Wia
wia = Wia()
wia.access_token = "YOUR_DEVICE_SECRET_KEY"  # the d_sk key from your device's Configuration tab
wia.Event.publish(name="temperature", data=21.5)
Run the program with python3 sensehat_wia.py, then go back to the Wia dashboard, select Devices, and check that the temperature event has appeared in the Events tab for your device.
Go to the Overview tab and click the Add a Widget button. Add a widget called Temperature. For the event field, make sure you type the event name exactly as it appears in the code (mind your case!). Your overview tab should be similar to the following:
All going well, you now have code that interacts with Wia and creates events.
Now let's update the code to use the SenseHat sensor values to create events:
Update sensehat_wia.py with the following code:
from wia import Wia
from sense_hat import SenseHat
sense = SenseHat()
wia = Wia()
wia.access_token = "YOUR_DEVICE_SECRET_KEY"  # the d_sk key from your device's Configuration tab
temp=round(sense.get_temperature(),2)
wia.Event.publish(name="temperature", data=temp)
You are now taking the temperature sensor reading from the SenseHat and publishing it to Wia.
Update sensehat_wia.py to do the following: publish temperature, pressure, and humidity events in your Wia space every 15 seconds. Then change the Temperature widget type to a graph (leave the default values for Time period and Aggregate function).
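One way to structure that loop is sketched below (a sketch only; swap in your own device secret key):
from wia import Wia
from sense_hat import SenseHat
import time
sense = SenseHat()
wia = Wia()
wia.access_token = "YOUR_DEVICE_SECRET_KEY"
while True:
    # read each sensor, rounded to two decimal places
    temp = round(sense.get_temperature(), 2)
    press = round(sense.get_pressure(), 2)
    hum = round(sense.get_humidity(), 2)
    # publish one event per reading
    wia.Event.publish(name="temperature", data=temp)
    wia.Event.publish(name="pressure", data=press)
    wia.Event.publish(name="humidity", data=hum)
    # wait 15 seconds before the next set of readings
    time.sleep(15)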
All going well, your overview tab should now look like this and update every 15 seconds.
You will now display your data on a simple webpage. Create a file called index.html and add the following content:
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<h1>SenseHat Data</h1>
</body>
</html>

Open your widget's share settings. Anyone can view this widget and embed it in any website. You should also see Embed code, which will start with <iframe> and end with </iframe>. Copy the entire Embed code and paste it below the <h1>SenseHat Data</h1> line and above the </body> line.
Open the index.html page in a browser. It should look similar to the following:
You can use GitHub to host your webpage so that anybody on the web can view it.
If you don't have a GitHub account already, you can make one here.
Once you are set up with GitHub, create a new repository and name it your-github-username.github.io. Check the box to initialize with a README.
Now, navigate to your new repository and create a new file. It must be named index.html. Copy and paste the contents of your local index.html into it.
Click commit changes. Now, visit your site at https://your-github-username.github.io. You're on the Web!
You will now use Wia events, commands and flows to control the SenseHat using facial expressions. Wia's commands functionality uses MQTT and the publish-subscribe pattern we talked about in class.
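As a minimal sketch of that pattern (the command slug, device id, and callback name here are placeholders, not the tutorial's final code), a device subscribes to a command and reacts in a callback:
from wia import Wia

def on_my_command(event):
    # called whenever the 'my-command' command is triggered from Wia
    print("command received")

wia = Wia()
wia.access_token = 'YOUR_DEVICE_SECRET_KEY'
wia.Stream.connect()
wia.Command.subscribe(**{"device": "YOUR_DEVICE_ID", "slug": "my-command", "func": on_my_command})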
In the Wia dashboard, select Commands and then click Add Command. Create happy-face and sad-face commands. You will use these commands to control the Raspberry Pi with a smile!

You will now write a small program that will take a photo and create photo events in Wia. PLEASE CHOOSE ONE OF THE FOLLOWING OPTIONS TO ACCOMPLISH THIS
Post photo from the Pi Camera
Run sudo raspi-config and select 5 Interfacing Options.
Select P1 Camera and enable the camera interface.
Your camera is ready to go! Exit raspi-config by selecting back/exit.
In the iot-week9 directory, create a file called snap.py containing the following code:
from wia import Wia
import time
from picamera import PiCamera
wia = Wia()
## INSERT YOUR SECRET KEY
wia.access_token = 'YOUR_SECRET_KEY'
camera = PiCamera()
## Halt execution until the user hits Enter
input('Look at the camera and hit "Enter" to take a pic...')
## Start up PiCam
camera.start_preview()
## sleep for a few seconds to let camera focus/adjust to light
time.sleep(5)
## Capture photo
camera.capture('/home/pi/image.jpg')
## Stop the PiCam
camera.stop_preview()
## Publish "photo" event to Wia. Include the photo file.
result = wia.Event.publish(name='photo', file=open('/home/pi/image.jpg', 'rb'))
In the iot-week9 directory, run the script by entering the following command: python3 snap.py
The photo is saved to /home/pi/image.jpg. Go back to the Wia dashboard and check that the photo event has appeared.
Post photo from webcam
Install OpenCV
sudo pip install opencv-python
sudo pip install wia
Create a directory called python-photo. Inside it, create a file called snap.py and enter the following code:
import cv2
from wia import Wia
import os
import time
input('Hit any key to take a pic...')
vc = cv2.VideoCapture(0)
wia = Wia()
wia.access_token = 'YOUR_DEVICE_SECRET_KEY'
file_name='wia-pic.jpg'
if vc.isOpened(): # try to get the first frame
    rval, frame = vc.read()
    cv2.imwrite(file_name, frame) # write the captured frame to wia-pic.jpg
    dir_path = os.path.dirname(os.path.realpath(__file__))
    # Publish "photo" event to Wia, attaching the photo file
    result = wia.Event.publish(name='photo', file=open(dir_path + '/' + file_name, 'rb'))
else:
    rval = False
vc.release() # release the webcam
Set wia.access_token to your device secret key and run the program.
You will now create a Flow that is triggered by photo events:

The Flow is triggered whenever a photo event is created by the SensePi device. The photo is passed to a Detect Faces service node, the output of which branches off into two Run Function logic nodes; one to output a string "Yes" if the subject is smiling, and one to output a string "No" if the subject isn't smiling. Here's the code for the 'smiling' logic node:
if (input.body.faceDetails && input.body.faceDetails.length > 0) {
    output.body.isSmiling = input.body.faceDetails[0].smile.value;
    if (output.body.isSmiling) {
        output.process = true;
        output.body.data = "Yes";
    } else {
        output.process = false;
    }
} else {
    output.process = false;
    output.body.data = false;
}
The code for the 'not smiling' node is as follows:
if (input.body.faceDetails && input.body.faceDetails.length > 0) {
    output.body.isSmiling = input.body.faceDetails[0].smile.value;
    if (!output.body.isSmiling) {
        output.process = true;
        output.body.data = "No";
    } else {
        output.process = false;
    }
} else {
    output.process = false;
    output.body.data = false;
}
Note that output.process acts as a gate: when it is false, that branch of the Flow stops, so only one of the two command nodes runs for any given photo. If the subject is smiling, the 'happy-face' Command is run, triggering the RPi to display a happy emoticon on the SenseHat. If the subject isn't smiling, the 'sad-face' Command is run, displaying a sad emoticon on the SenseHat.
Add a widget to the SensePi device's overview page and link it to the photo event as follows:
Add a Text widget and link it to the happy event. Then, in the iot-week9 directory containing sensehat_wia.py, download the emoticon definitions:
wget http://rpf.io/shfaces -O faces.py
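If you want to check the downloaded faces before wiring up the commands, a quick test sketch (happy, sad, and normal are the names provided by faces.py) is:
from sense_hat import SenseHat
from faces import happy

sense = SenseHat()
# set_pixels takes a list of 64 (R, G, B) tuples, one per LED of the 8x8 matrix
sense.set_pixels(happy)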
Now update sensehat_wia.py to subscribe to the commands and show the corresponding emoticon by updating the code to the following:
from wia import Wia
from sense_hat import SenseHat
import time
from faces import normal, happy, sad
# happy face callback
def on_happy_face(event):
    print(":)")
    sense.set_pixels(happy)
# sad face callback
def on_sad_face(event):
    print(":(")
    sense.set_pixels(sad)
sense = SenseHat()
wia = Wia()
wia.access_token = 'YOUR_DEVICE_SECRET_KEY'  # your d_sk key from the Wia dashboard
deviceId = 'YOUR_DEVICE_ID'  # your Wia device id (begins with dev_)
wia.Stream.connect()
# Subscribe to happy and sad face commands
wia.Command.subscribe(**{"device": deviceId, "slug": 'happy-face', "func": on_happy_face})
wia.Command.subscribe(**{"device": deviceId, "slug": 'sad-face', "func": on_sad_face})
while True:
    temp = round(sense.get_temperature(), 2)
    press = round(sense.get_pressure(), 2)
    hum = round(sense.get_humidity(), 2)
    # publish temp/pressure/hum events
    wia.Event.publish(name="temperature", data=temp)
    wia.Event.publish(name="pressure", data=press)
    wia.Event.publish(name="humidity", data=hum)
    time.sleep(60)
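All going well, you can leave sensehat_wia.py running in one terminal, take a photo with python3 snap.py in another, and the SenseHat should show the happy or sad face depending on whether you are smiling.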
