Color or monochrome
Color sensor
Both monochrome and color cameras are built around a monochrome chip. The silicon chip converts photons into electrons and cannot distinguish the wavelength of an incoming photon. So that a color camera can nevertheless produce color images, a so-called Bayer matrix is placed on top of the chip.
[Image: Bayer filter mosaic on a sensor chip. Cburnett, CC BY-SA 3.0, via Wikimedia Commons]
This matrix allows only a certain wavelength range (red, green or blue) to pass through to each pixel.
Under such a Bayer matrix, the pixels are grouped into blocks of four, each block sharing the three color channels. The most commonly used pattern is the RGGB matrix: one red, two green and one blue pixel per block. Two green pixels are used because the human eye is most sensitive in this range.
In the human eye, the green component of gray tones makes the greatest contribution to perceived brightness, and therefore also to perceived contrast and sharpness. (https://de.wikipedia.org/wiki/Bayer-Sensor)
Splitting each block into four filtered pixels reduces the overall sensitivity of the camera chip: compared to a monochrome chip, the image is darker at the same exposure time. Various demosaicing (debayering) algorithms and interpolations are then used to generate a full-color image from this information. It is essential, however, to tell the software which Bayer matrix was used, otherwise the colors of the image will be wrong.
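The channel-extraction step that precedes demosaicing can be sketched as follows. This is a minimal illustration, assuming the raw frame arrives as a 2-D NumPy array with RGGB layout; the function name and toy values are made up for the example:

```python
import numpy as np

def split_rggb(raw):
    """Split a raw RGGB Bayer frame into its R, G and B planes.

    Each plane has a quarter of the pixels; the two green pixels of
    every 2x2 block are averaged. Real demosaicing algorithms go
    further and interpolate full-resolution color, but the channel
    assignment shown here is the step that depends on the pattern."""
    r = raw[0::2, 0::2].astype(float)                      # top-left of each block
    g = (raw[0::2, 1::2].astype(float) + raw[1::2, 0::2]) / 2.0  # the two greens
    b = raw[1::2, 1::2].astype(float)                      # bottom-right
    return r, g, b

# toy 4x4 raw frame with RGGB layout
raw = np.array([
    [10, 20, 10, 20],   # R G R G
    [30, 40, 30, 40],   # G B G B
    [10, 20, 10, 20],
    [30, 40, 30, 40],
], dtype=np.uint16)

r, g, b = split_rggb(raw)
print(r[0, 0], g[0, 0], b[0, 0])  # 10.0 25.0 40.0
```

If the software assumed a different pattern (e.g. BGGR), the same slices would land on the wrong planes, which is exactly how the color casts described above arise.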
Using narrowband filters with a color camera is somewhat awkward because of the division into color channels: only one wavelength range passes the filter, and between the pixels of the matching color lie pixels that receive no light. There are now duo-narrowband filters designed specifically for color cameras, which pass two wavelength ranges (e.g. 500 nm and 656 nm). At 500 nm the blue and green pixels are exposed, at 656 nm the red pixels.
These captures can then be separated in software, so that the two wavelength ranges can be processed individually and afterwards combined with the RGB capture. This yields a more detailed image.
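The software separation can be sketched with a deliberately simple model: assuming the 656 nm H-alpha line lands almost entirely in the red channel and the ~500 nm OIII line in both green and blue, the debayered image can simply be regrouped by channel (function name and values are illustrative, not from any specific tool):

```python
import numpy as np

def split_duo_narrowband(rgb):
    """Separate a debayered duo-narrowband capture into H-alpha and OIII.

    Simplifying assumption: red carries only H-alpha (656 nm), while
    green and blue both carry OIII (~500 nm), so averaging them gives
    the OIII signal. rgb has shape (H, W, 3)."""
    h_alpha = rgb[..., 0].astype(float)
    oiii = (rgb[..., 1].astype(float) + rgb[..., 2]) / 2.0
    return h_alpha, oiii

# toy 1x2 image: left pixel pure H-alpha, right pixel pure OIII
rgb = np.array([[[200, 0, 0], [0, 120, 80]]], dtype=np.uint16)
h_alpha, oiii = split_duo_narrowband(rgb)
print(h_alpha)  # [[200.   0.]]
print(oiii)     # [[  0. 100.]]
```

The two resulting monochrome frames can then be stretched and recombined independently, as described above.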
Color sensors are also called "single shot" RGB sensors, because all color information is contained in a single frame.
Monochrome sensor
With a monochrome sensor, the color separation of the Bayer matrix is no longer necessary and the chip can use its full potential. To obtain a color image with a monochrome sensor, however, several series of images must be taken. The first series is the luminance image, usually taken without a filter, which records the brightness of the scene. It is a black-and-white image, so it is not known which color information belongs to which pixel.
This is why three further exposure series are needed, with a red, a green and a blue filter placed in the optical path in turn. Each filter lets only its corresponding wavelength range reach the pixels, so that a color assignment is possible during subsequent image processing.
If narrowband exposures are desired as well, additional captures must be added to these four exposure series, or the narrowband exposures are mapped to the desired color channels.
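The basic idea of combining the four exposure series can be sketched as follows. This is a minimal model, not a real stacking pipeline: the R, G and B frames provide only the color ratios, while the luminance frame sets the per-pixel brightness (function name and toy values are invented for illustration):

```python
import numpy as np

def combine_lrgb(l, r, g, b):
    """Combine monochrome L, R, G and B frames into one RGB image.

    Sketch of the LRGB idea: normalize the RGB frames to per-pixel
    color ratios, then scale by the luminance frame so that the mean
    of the three output channels equals L."""
    rgb = np.stack([r, g, b], axis=-1).astype(float)
    ratios = rgb / np.maximum(rgb.sum(axis=-1, keepdims=True), 1e-9)
    return ratios * 3.0 * l[..., None]

# toy 2x2 frames: a reddish patch with luminance 100
l = np.full((2, 2), 100.0)
r = np.full((2, 2), 50.0)
g = np.full((2, 2), 25.0)
b = np.full((2, 2), 25.0)

out = combine_lrgb(l, r, g, b)
print(out[0, 0])  # [150.  75.  75.] -- channel mean is 100, matching L
```

Real processing software performs calibration, alignment and nonlinear stretching before a step like this, but the division of labor is the same: color comes from the filtered frames, detail and brightness from the luminance frame.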
Conclusion:
Even though the individual exposure series can be shorter thanks to the higher sensitivity, so that the total exposure time ends up similar to that of a color camera, the workload with a monochrome astro camera is considerably higher. The reward, however, is images rich in contrast and detail that are very difficult to achieve with a color camera.
To simplify the entry into astrophotography, beginners are advised to start with a color camera (astro camera or DSLR) because it is easier to handle. With the technology of today's color cameras, very satisfying images can be produced.