United States Copyright Office Website
While you are visiting the United States Copyright Office website, explore various links, such as those that provide general copyright or legislative information. In the Legislation section, you might be particularly interested in reading about the legislation to keep pace with advances in digital media and commerce (such as the Digital Millennium Copyright Act) and what it means for copyright protection. You might also want to read the information given in the General Information section. If you decide to register your website, you can access the necessary forms and fee information in the Forms section of the site.
What is copyright?
Copyright is a form of protection grounded in the U.S. Constitution and granted by law for original works of authorship fixed in a tangible medium of expression. Copyright covers both published and unpublished works.
What does copyright protect?
Copyright, a form of intellectual property law, protects original works of authorship including literary, dramatic, musical, and artistic works, such as poetry, novels, movies, songs, computer software, and architecture. Copyright does not protect facts, ideas, systems, or methods of operation, although it may protect the way these things are expressed.
Generating Synthetic Examples
As computers become faster, the other way of putting in knowledge about invariances, generating synthetic training examples rather than hand-designing the network's connectivity, begins to look better.
Generating synthetic examples allows the optimization to discover clever ways of using the multilayer network that we did not think of; in fact, we might never fully understand how the network does it. If we just want good solutions to a problem, that may be fine. This idea of synthetic data leads to a brute force approach to handwritten digit recognition.
LeNet5 uses knowledge about invariances to design the connectivity, the weight sharing, and the pooling, and it achieves about 80 errors. By adding further techniques, including synthetic data, Ranzato (2008) was able to get this down to about 40 errors.
A group in Switzerland led by Juergen Schmidhuber implemented this approach on a large scale, injecting knowledge by putting in large amounts of synthetic data.
They put a great deal of work into creating instructive synthetic data: for every real training case, they generated transformed versions of it to make many more training examples.
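To make the idea of transforming each real training case concrete, here is a minimal sketch in Python (using NumPy and SciPy) of turning one digit image into several synthetic variants via small random shifts and rotations. The helper name make_synthetic_examples and the parameter ranges are illustrative assumptions; the Swiss group's actual augmentation pipeline was more elaborate than this.

```python
import numpy as np
from scipy import ndimage

def make_synthetic_examples(image, n_copies=5, rng=None):
    """Generate distorted copies of one 28x28 grayscale digit image.

    The shift and rotation ranges below are illustrative guesses,
    not the exact parameters used by the Swiss group.
    """
    rng = np.random.default_rng() if rng is None else rng
    copies = []
    for _ in range(n_copies):
        # Small random translation (pixels) and rotation (degrees).
        dx, dy = rng.uniform(-2, 2, size=2)
        angle = rng.uniform(-15, 15)
        distorted = ndimage.shift(image, (dy, dx), order=1, mode="constant")
        distorted = ndimage.rotate(distorted, angle, reshape=False,
                                   order=1, mode="constant")
        copies.append(distorted)
    return np.stack(copies)

# Usage: turn one real training case into several synthetic ones.
real_digit = np.zeros((28, 28), dtype=np.float32)  # stand-in for a real image
augmented = make_synthetic_examples(real_digit, n_copies=5)
print(augmented.shape)  # (5, 28, 28)
```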
They then trained a large neural net, with many units per layer and many layers, on a graphics processing unit (GPU). The GPU gave them a factor of about thirteen more computation, and because of all the synthetic data they put in, the network did not overfit.
If they had simply trained a large neural net on a GPU without the synthetic data, it would have been a disaster: it would have overfitted terribly, performing well on the training data and badly on the test data.
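For readers who want to see what "a large net with many units per layer and many layers on a GPU" looks like in code, here is a minimal sketch using PyTorch. The layer sizes, optimizer settings, and the train_step helper are assumptions for illustration, not the architecture the Swiss group actually used; the point is simply that the same training loop runs on a GPU when one is available and is fed the augmented (real plus synthetic) batches.

```python
import torch
from torch import nn

# Assumed shapes: flattened 28x28 digit images, 10 output classes.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A deep, fully connected net with many units per layer
# (layer sizes are illustrative, not the group's actual design).
model = nn.Sequential(
    nn.Linear(784, 2500), nn.ReLU(),
    nn.Linear(2500, 2000), nn.ReLU(),
    nn.Linear(2000, 1500), nn.ReLU(),
    nn.Linear(1500, 1000), nn.ReLU(),
    nn.Linear(1000, 500), nn.ReLU(),
    nn.Linear(500, 10),
).to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One training step on a batch of (real + synthetic) examples."""
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with dummy data standing in for an augmented training batch.
batch = torch.randn(64, 784)
targets = torch.randint(0, 10, (64,))
print(train_step(batch, targets))
```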