Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
Use Keras if you need a deep learning library that:
Allows for easy and fast prototyping (through user friendliness, modularity, and extensibility).
Supports both convolutional networks and recurrent networks, as well as combinations of the two.
Runs seamlessly on CPU and GPU.
Why use Keras rather than another framework? Here is what the Keras team says:
There are countless deep learning frameworks available today. Why use Keras rather than any other? Here are some of the areas in which Keras compares favourably to existing alternatives.
Keras prioritizes developer experience
Keras is an API designed for human beings, not machines. Keras follows best practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes the number of user actions required for common use cases, and it provides clear and actionable feedback upon user error.
This ease of use does not come at the cost of reduced flexibility: because Keras integrates with lower-level deep learning languages (in particular TensorFlow), it enables you to implement anything you could have built in the base language. In particular, as tf.keras, the Keras API integrates seamlessly with your TensorFlow workflows.
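As a minimal sketch of that integration (assuming TensorFlow is installed; the layer sizes here are illustrative, not from the original text), the familiar Sequential API is available directly under the tf.keras namespace:

```python
import tensorflow as tf

# The same Sequential model-building style as standalone Keras,
# accessed through TensorFlow's bundled tf.keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='sgd', loss='mse')
```

Because the model is an ordinary TensorFlow object, it can be mixed freely with the rest of a TensorFlow workflow.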
Keras has broad adoption in the industry and the research community
With over 200,000 individual users as of November 2017, Keras has stronger adoption in both the industry and the research community than any other deep learning framework except TensorFlow itself (and Keras is commonly used in conjunction with TensorFlow).
You are already constantly interacting with features built with Keras — it is in use at Netflix, Uber, Yelp, Instacart, Zocdoc, Square, and many others. It is especially popular among startups that place deep learning at the core of their products.
Keras is also a favourite among deep learning researchers, ranking second in mentions in scientific papers uploaded to the preprint server arXiv.org.
Keras has also been adopted by researchers at large scientific organizations, in particular, CERN and NASA.
Keras makes it easy to turn models into products
Your Keras models can be easily deployed across a greater range of platforms than models from any other deep learning framework.
Keras supports multiple backend engines and does not lock you into one ecosystem
Your Keras models can be developed with a range of different deep learning backends. Importantly, any Keras model that only leverages built-in layers will be portable across all these backends: you can train a model with one backend, and load it with another (e.g. for deployment). Available backends include:
The TensorFlow backend (from Google)
The CNTK backend (from Microsoft)
The Theano backend
Amazon is also currently working on developing an MXNet backend for Keras.
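This portability rests on Keras's model-serialization API: a model built from built-in layers can be saved under one backend and reloaded under another. A hedged sketch (the file name and layer sizes are illustrative):

```python
from keras.models import Sequential, load_model
from keras.layers import Dense

# Build and compile a small model using only built-in layers.
model = Sequential()
model.add(Dense(8, activation='relu', input_shape=(4,)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='sgd')

# Save the architecture and weights to a single HDF5 file.
# This could happen under, e.g., the TensorFlow backend...
model.save('portable_model.h5')

# ...and the file can later be reloaded under a different backend
# (selected via the KERAS_BACKEND environment variable).
restored = load_model('portable_model.h5')
```

Only models restricted to built-in layers are guaranteed to round-trip this way; custom layers tie the model to backend-specific code.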
As such, your Keras model can be trained on a number of different hardware platforms beyond the CPU. As a quick example, here is a small Keras model that learns to fit a sine wave:
import numpy as np
import matplotlib.pyplot as plt
from keras.layers import Dense, Activation
from keras.models import Sequential
x = np.arange(500).reshape(-1,1) / 50
y = np.sin(x)
model = Sequential()
model.add(Dense(40, input_dim=1, activation='relu'))  # hidden layer; the model needs layers before it can be compiled
model.add(Dense(1))  # single linear output for regression
model.compile(loss='mean_squared_error', optimizer='SGD', metrics=['mean_squared_error'])
history = model.fit(x, y, epochs=500, batch_size=5, verbose=1)
# print(history.history.keys())
predictions = model.predict(x)
scores = model.evaluate(x, y, verbose=0)  # returns [loss, mean_squared_error]
print("Baseline error: %.2f%%" % (100 - scores[1] * 100))
plt.plot(x, y)
plt.plot(x, predictions)
plt.legend(['actual', 'predicted'], loc='upper left')
plt.show()
What about when this perfect signal has some noise on it?
x = np.random.normal(x,0.1)
We can add Gaussian noise to the signal using NumPy, as seen above. As the cursory results below show, adding this noise requires adjusting the network so that it can better learn the new relationship.
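The text does not specify what adjustment is made; one plausible sketch is a deeper, wider network trained on the noisy inputs (the layer sizes, optimizer, and epoch count below are assumptions, not taken from the original):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Same signal as before, with Gaussian noise injected into the inputs.
x = np.arange(500).reshape(-1, 1) / 50
y = np.sin(x)
x_noisy = np.random.normal(x, 0.1)

# Assumed adjustment: two wider hidden layers and the Adam optimizer,
# to smooth over the noise rather than fit it point by point.
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(1,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_noisy, y, epochs=100, batch_size=32, verbose=0)

predictions = model.predict(x_noisy)
```

Extra capacity helps here because the underlying sine relationship is still smooth; regularization or more data would be the usual next steps if the network started fitting the noise itself.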