Keras Simple Machine Learning

So what is Keras?

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. *Being able to go from idea to result with the least possible delay is key to doing good research.*

Use Keras if you need a deep learning library that:

- Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
- Supports both convolutional networks and recurrent networks, as well as combinations of the two.
- Runs seamlessly on CPU and GPU.

And why use it?

Here is what the Keras team says:

There are countless deep learning frameworks available today. Why use Keras rather than any other? Here are some of the areas in which Keras compares favourably to existing alternatives.

- Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable feedback upon user error.
- This makes Keras easy to learn and easy to use. As a Keras user, you are more productive, allowing you to try more ideas than your competition, faster — which in turn helps you win machine learning competitions.
- This ease of use does not come at the cost of reduced flexibility: because Keras integrates with lower-level deep learning languages (in particular TensorFlow), it enables you to implement anything you could have built in the base language. In particular, as tf.keras, the Keras API integrates seamlessly with your TensorFlow workflows.

With over 200,000 individual users as of November 2017, Keras has stronger adoption in both the industry and the research community than any other deep learning framework except TensorFlow itself (and Keras is commonly used in conjunction with TensorFlow).

You are already constantly interacting with features built with Keras — it is in use at Netflix, Uber, Yelp, Instacart, Zocdoc, Square, and many others. It is especially popular among startups that place deep learning at the core of their products.

Keras is also a favourite among deep learning researchers, coming in #2 in terms of mentions in scientific papers uploaded to the preprint server arXiv.org.

Keras has also been adopted by researchers at large scientific organizations, in particular, CERN and NASA.

Your Keras models can be easily deployed across a greater range of platforms than any other deep learning framework:

- On iOS, via Apple’s CoreML (Keras support officially provided by Apple). Here’s a tutorial.
- On Android, via the TensorFlow Android runtime. Example: Not Hotdog app.
- In the browser, via GPU-accelerated JavaScript runtimes such as Keras.js and WebDNN.
- On Google Cloud, via TensorFlow-Serving.
- In a Python webapp backend (such as a Flask app).
- On the JVM, via DL4J model import provided by SkyMind.
- On Raspberry Pi.

Your Keras models can be developed with a range of different deep learning backends. Importantly, any Keras model that only leverages built-in layers will be portable across all these backends: you can train a model with one backend, and load it with another (e.g. for deployment). Available backends include:

- The TensorFlow backend (from Google)
- The CNTK backend (from Microsoft)
- The Theano backend

Amazon is also currently working on developing an MXNet backend for Keras.
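As a minimal sketch of what this portability looks like in practice (the file name and layer sizes here are illustrative, not from the documentation): a model built only from built-in layers can be saved with one backend and reloaded later, potentially under a different backend, with its weights intact.

```python
import numpy as np
from keras.models import Sequential, load_model
from keras.layers import Dense

# A tiny model built only from built-in layers, so it stays
# portable across backends
model = Sequential()
model.add(Dense(8, input_dim=1, activation='tanh'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='sgd')

# Saves architecture, weights, and optimizer state to one file
model.save('sine_model.h5')

# Reloading reconstructs the same model; this step could run
# under a different backend than the one that saved it
restored = load_model('sine_model.h5')
```

The saved and restored models produce identical predictions, which is what makes the train-with-one-backend, deploy-with-another workflow possible.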

As such, your Keras model can be trained on a number of different hardware platforms beyond CPUs:

- NVIDIA GPUs
- Google TPUs, via the TensorFlow backend and Google Cloud
- OpenCL-enabled GPUs, such as those from AMD, via the PlaidML Keras backend

Keras also has strong support for multi-GPU and distributed training:

- Keras has built-in support for multi-GPU data parallelism
- Horovod, from Uber, has first-class support for Keras models
- Keras models can be turned into TensorFlow Estimators and trained on clusters of GPUs on Google Cloud
- Keras can be run on Spark via Dist-Keras (from CERN) and Elephas

*The above information was taken from the Keras documentation.*

So Keras sounds great. You already have some idea about neural networks, and you just want to start putting that knowledge into practice and understanding its many possible uses. One of the first things I started with was a tutorial I found online: https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/. Below, I adapted the idea to a simple regression problem: fitting a sine wave.

```python
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense

# Generate a clean sine wave: 500 points over [0, 10)
x = np.arange(500).reshape(-1, 1) / 50
y = np.sin(x)

# A small fully connected network that maps x -> sin(x)
model = Sequential()
model.add(Dense(40, input_dim=1, activation='relu'))
model.add(Dense(12, activation='tanh'))
model.add(Dense(6, activation='tanh'))
model.add(Dense(3, activation='tanh'))
model.add(Dense(1, activation='tanh'))

model.compile(loss='mean_squared_error', optimizer='SGD',
              metrics=['mean_squared_error'])

history = model.fit(x, y, epochs=500, batch_size=5, verbose=1)

predictions = model.predict(x)
scores = model.evaluate(x, y, verbose=0)
print("Final MSE: %.4f" % scores[1])

# Compare the network's output with the true signal
plt.plot(predictions, label='prediction')
plt.plot(y, label='sin(x)')
plt.legend(loc='upper left')
plt.show()

# Training curve
plt.plot(history.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
```

*What about when this perfect signal has some noise on it?*

```python
x = np.random.normal(x, 0.1)
```

We can add some Gaussian noise to the signal using the NumPy library, as seen above. As the cursory results below show, adding noise requires adjusting the network so it can better learn the new relationship.
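As a sketch of what such an adjustment might look like (the layer sizes, optimizer, and training schedule here are my own guesses, not taken from the tutorial), one option is to widen the network, switch to a more forgiving optimizer such as Adam, and train on the noisy inputs against the clean targets so the network learns to average over the noise:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x = np.arange(500).reshape(-1, 1) / 50
y = np.sin(x)
x_noisy = np.random.normal(x, 0.1)  # the same Gaussian noise as above

# A wider network than before; sizes here are illustrative
model = Sequential()
model.add(Dense(64, input_dim=1, activation='relu'))
model.add(Dense(32, activation='tanh'))
model.add(Dense(1, activation='tanh'))

# Adam tends to be less fussy than plain SGD on noisy data
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_noisy, y, epochs=100, batch_size=16, verbose=0)

# Evaluate against the clean signal to see how well the
# network recovered the underlying relationship
predictions = model.predict(x, verbose=0)
```

Because the targets are still the clean `sin(x)` values, the loss directly measures how well the network sees through the noise, rather than how well it memorises noisy points.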