So I want to gather climate data and make some predictions on my own using a variety of factors and an Arduino Nano programmed in assembler. That requires storing the data I collect and ensuring that it can stream and is accountable. Which normalization level do I use?
- 1NF: only reduces horizontal redundancy (repeating groups within a row), so no.
- 2NF: only reduces vertical redundancy (partial dependencies on the key), so no.
- 3NF: closer. Every non-key attribute relates to the key. BCNF is even closer, since the key explains everything and all candidate keys are accounted for.
- 4NF: splits out independent multi-valued facts and further reduces redundancy. So weather data can be separated by sensor, or snow-water equivalent by area and layer.
- 5NF: accounts for more business-like rules (join dependencies). Is this overdoing it? It is semantic. Do I know enough about the domain to use it?
- 6NF: breaks every table down to a key plus a single attribute, reconstructed with joins. It is good for temporal data.
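To make the 4NF point concrete, here is a minimal sketch using Python's built-in sqlite3. The station/sensor/area names are hypothetical, not the project's actual schema; the idea is just that two independent multi-valued facts about a station get pulled into separate tables.

```python
import sqlite3

# Hypothetical setup: one station carries several sensors and reports for
# several areas, and the two facts are independent of each other. Keeping
# them in one table forces every sensor x area combination to be stored;
# 4NF splits the two multi-valued dependencies into their own tables.
con = sqlite3.connect(":memory:")
cur = con.cursor()

# Flat table: station ->> sensor and station ->> area mixed together.
cur.execute("CREATE TABLE station_flat (station TEXT, sensor TEXT, area TEXT)")
flat = [
    ("S1", "temp", "ridge"), ("S1", "temp", "valley"),
    ("S1", "humidity", "ridge"), ("S1", "humidity", "valley"),
]
cur.executemany("INSERT INTO station_flat VALUES (?, ?, ?)", flat)

# 4NF decomposition: one table per multi-valued fact.
cur.execute("CREATE TABLE station_sensor (station TEXT, sensor TEXT,"
            " PRIMARY KEY (station, sensor))")
cur.execute("CREATE TABLE station_area (station TEXT, area TEXT,"
            " PRIMARY KEY (station, area))")
cur.execute("INSERT INTO station_sensor"
            " SELECT DISTINCT station, sensor FROM station_flat")
cur.execute("INSERT INTO station_area"
            " SELECT DISTINCT station, area FROM station_flat")

# Same information, fewer rows: 4 flat rows become 2 + 2.
n_sensor = cur.execute("SELECT COUNT(*) FROM station_sensor").fetchone()[0]
n_area = cur.execute("SELECT COUNT(*) FROM station_area").fetchone()[0]
print(n_sensor, n_area)  # 2 2

# Joining the two tables reconstructs the original combinations losslessly.
rejoined = cur.execute(
    "SELECT COUNT(*) FROM station_sensor JOIN station_area USING (station)"
).fetchone()[0]
print(rejoined)  # 4
```

With more sensors and areas per station the savings grow multiplicatively, which is why 4NF pays off for sensor-style data.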
My data is meant to persist once it is inserted. It must be separated for easy mathematical calculations. Finally, it deals with nature, so relationships should probably not be rule-defined. In particular, it deals with a side of nature that no one really knows much about. I want to preserve all possible relationships. Therefore, 5NF is a bit much.
I do need to relate things to keys so I can grab data by a specific area, day, weather condition, type of phenomenon, or whatever else I need. I also need to separate attributes into easy-to-grab pieces with an appropriate impact. The goal is prediction and calculation.
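That access pattern can be sketched as a readings table whose composite key covers station, day, and phenomenon. Again a minimal sqlite3 sketch with made-up names, not the project's real schema:

```python
import sqlite3

# Hypothetical readings table keyed so rows can be grabbed by station,
# day, and phenomenon independently, as described above.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("""
    CREATE TABLE reading (
        station    TEXT,
        day        TEXT,   -- ISO date string
        phenomenon TEXT,   -- e.g. 'temp_c', 'swe_mm'
        value      REAL,
        PRIMARY KEY (station, day, phenomenon)
    )
""")
rows = [
    ("S1", "2024-01-01", "temp_c", -3.5),
    ("S1", "2024-01-01", "swe_mm", 120.0),
    ("S1", "2024-01-02", "temp_c", -1.0),
]
cur.executemany("INSERT INTO reading VALUES (?, ?, ?, ?)", rows)

# Grab one attribute for one day -- the "easy to grab" pattern.
temp = cur.execute(
    "SELECT value FROM reading WHERE station=? AND day=? AND phenomenon=?",
    ("S1", "2024-01-01", "temp_c"),
).fetchone()[0]
print(temp)  # -3.5

# Aggregate one phenomenon across days for prediction/calculation.
avg_temp = cur.execute(
    "SELECT AVG(value) FROM reading WHERE phenomenon='temp_c'"
).fetchone()[0]
print(avg_temp)  # -2.25
```

Keeping one value per row (rather than one wide column per measurement) is what lets the math side pull exactly the series it needs.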
I am going to use 4NF. Check back for more on this project.