
April 7, 2012

LumoLabs: Nikon D800 video function demystified

Nikon D800 FX mode 1080p video frame (click for original size)
The Nikon D800 full frame SLR camera has created a lot of buzz recently. Some would call it hype. While it is clear that its 36 MP still resolution is pretty much unparalleled in the 35mm camera class, the final verdict about its video subsystem is still out, especially in comparison with Canon's 5DmkIII.

One point of interest has been how either camera actually creates its video frames. I now had a chance to apply LumoLabs' testing methodology to a loaner D800 camera and figure it out for 1080p video in FX mode. I also take a look at live view performance.

You may jump to the conclusion at the end if you just want to read what we found, ignoring how we did it :)


Nikon D800 FX mode FullHD 1080p video

The title image shows one frame from a 1080p video taken with the Nikon D800 (in FX mode, it supports a number of crop video modes too). It shows a zone plate test chart which can be used to perform a sampling error frequency analysis.

Please, read falklumo.blogspot.de/2009/10/lumolab-welcome-and-testing-methodology.html to learn more about the testing methodology incl. access to the original of the test chart allowing everybody to replicate my analysis.

There is a bit of (gray colored) moiré from the printing process. This is because scaling and printing of zone plates is a non-trivial art in itself ;) You can actually measure the printer's native resolution by inspecting the printed zone plate chart. Below, you find a photograph of the print (in 14.6 MP resolution) allowing you to determine which moiré patterns actually come from the printing process.

Printed zone plate chart (still shot with a 14.6 MP camera, for reference)

However, all colorful moiré patterns are artefacts introduced by the D800 video system. This allows us to precisely measure how it works. Let's have a close look at one of the two center discs:

Analyzed region of interest in the D800 video frame

The big discs are constructed such that the 1080p Nyquist frequency emerges at their outer circle. The two center discs have their edge at twice this Nyquist frequency and the four tiny discs at four times this frequency. Therefore, the false color moiré disc emerges at (149px/258px x2) or 1.155x the 1080p Nyquist frequency (1247 px). This means that the Nikon D800 samples ~1247 horizontal lines from its sensor.

Now, let's make a back-of-the envelope calculation:

An FX frame in video mode is taken from a 6720 x 3780 px region, which actually is a 1.095x crop from the full 7360 x 4912 px frame (this information is from the Nikon user guide, translating physical dimensions into pixels). Because 3780 / 1247 = 3.03 and because 1% is our measurement error, we have proof that the Nikon D800 samples every third horizontal line from its sensor.
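The measurement arithmetic above can be replayed in a few lines of Python as a sanity check; only the measured radii and the crop size quoted in the text go in:

```python
# Replaying the line-skipping measurement described above.

measured_radius_px = 149      # radius of the false color moiré disc
disc_radius_px = 258          # full disc radius (its edge sits at 2x Nyquist)

relative_freq = measured_radius_px / disc_radius_px * 2   # ~1.155x Nyquist
sampled_lines = round(1080 * relative_freq)               # ~1247 lines

crop_height = 3780            # FX video crop height in sensels (user guide)
skip_factor = crop_height / sampled_lines                 # ~3.03 -> every 3rd line

print(relative_freq, sampled_lines, skip_factor)
```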

A second result is that the ever so slight color moiré for horizontal frequencies disappears at the Nyquist frequency. The D800's AA filter is effective here; the remaining moiré is from the printing. The D800E would have a bit of additional color moiré here, but by far not as strong as in the vertical direction. So, I believe that the Nikon D800 samples every vertical column from its sensor.

Below is how I believe Nikon implemented the line skipping:

Likely D800 sensel sampling matrix

and here is a slightly more symmetrical scheme which I cannot entirely rule out, although I think it isn't used in this mode:
Unlikely sensel sampling matrix
If you look at the likely sensel sampling matrix, you'll see that all sensels which are read out (the ones with a color) form a new RGGB Bayer matrix. This has the advantage that a standard demosaicing algorithm can be applied to create an RGB frame.

This is actually similar to what the Canon 5DmkII does. However, there is one important aspect where the D800 is different:

A native 1080p video frame is 6720 x 1260 px, demosaiced to a 2240 x 1260 px RGB frame.

And the final 1080p video frame is further downsampled 7:6 to 1920 x 1080 px, which gives the D800 a slight edge in resolution and edge flicker behaviour over a 5DmkII.
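The whole chain of frame sizes can be written down as plain integer arithmetic (a sketch using only the dimensions quoted above):

```python
# The D800 FX 1080p pipeline dimensions, step by step.

sensor = (7360, 4912)                         # full D800 sensor
crop = (6720, 3780)                           # FX video crop region
crop_factor = sensor[0] / crop[0]             # ~1.095

bayer = (crop[0], crop[1] // 3)               # every 3rd line -> 6720 x 1260 Bayer
rgb = (crop[0] // 3, bayer[1])                # demosaiced -> 2240 x 1260 RGB
final = (rgb[0] * 6 // 7, rgb[1] * 6 // 7)    # 7:6 downsample -> 1920 x 1080

print(bayer, rgb, final)
```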


High ISO noise in video

What we found has one important consequence: High ISO noise in video! Because of the FX video crop and skipping two thirds of the sensels, the ISO performance in video is shifted by a factor of 3.60. E.g., at ISO 12,800, the noise looks (as bad) as at ISO 46,000 from a camera using all available sensels for video (except for the 16:9 ratio crop, of course).

You may note, however, that the D800 still samples 6720 x 1260 sensels for a 1920 x 1080 frame, or 4.08 sensels per pixel. For this reason, at ISO 12,800, the noise looks (as good) as at ISO 3,200 from a still image when pixel peeping at the 100% (1:1) level. So, pixel noise in D800 video is 2 stops less compared to a still, while it could have been 3.85 stops less when reading out a maximum of sensels. Whether you consider this bad or good is up to you.
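The stop figures can be recomputed from the sampling geometry (a back-of-the-envelope sketch; the 16:9 crop is ignored, as in the text):

```python
import math

# Recomputing the noise figures quoted above.

read = 6720 * 1260                       # sensels sampled per video frame
frame = 1920 * 1080                      # output pixels per frame
per_pixel = read / frame                 # ~4.08 sensels per output pixel

gain_stops = math.log2(per_pixel)        # ~2.0 stops less pixel noise than a still
skip_penalty = math.log2(3 * 1.095**2)   # ~1.85 stops lost to crop + line skipping
ideal_gain = gain_stops + skip_penalty   # ~3.9 stops, if all sensels were read

print(round(per_pixel, 2), round(gain_stops, 2), round(ideal_gain, 2))
```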

Below, I have extracted frames from the ISO comparison performed by crisislab.com:

Video noise comparison D800 vs. 5DmkIII -- original frames (c) 2012 crisislab.com
On the left-hand strip, I have shifted the D800 samples two stops down and I think it is a good match for the 5DmkIII performance then.

From that, I can already conclude that the 5DmkIII reads out all its sensels, i.e., does no line skipping. I didn't run a resolution analysis for the 5DmkIII, though. But hearing about resolution complaints for 5DmkIII video, I think they bin pixels before read-out. This improves noise and aliasing performance but, unlike downsampling, doesn't help the resolution.


Nikon D800 Live View implementation notes

I have applied our testing methodology to Nikon's live view implementation too.

D800 live view, photograph of the rear LCD (no zoom level)
You see the same false color moiré discs which we have analyzed already. Of course, there is some strong additional moiré from the LCD rasterization. I.e., the D800 only reads every third line when activating live view (in the example, it is FX video live view).

If we zoom in, we get a result as follows.

D800 live view, photograph of the rear LCD (high zoom level)
You now see different false color moiré discs; they have moved outwards. The sampling frequency is (1692px/1935px x2) or 1.749x the 1080p Nyquist frequency (1889 px). Because 3780 / 1889 = 2.00, we have proof that the Nikon D800 samples every second horizontal line from its sensor when zooming in far enough in live view.

In live view, the D800 switches from third line to second line skipping when zooming in!

Lessons for manual focusing: (1) zoom in and (2) focus on vertical structures, which have twice the resolution in live view! Focus on trees or edges of buildings rather than the horizon or a roof top.


Conclusion

The D800 creates FX 1080p video in the following way:
  1. Crop a region of 6720 x 3780 sensels (crop factor 1.095).
  2. Read only every third line out of this region, but all sensels in a line. The result is a 6720 x 1260 sensel RGGB Bayer pattern which can be demosaiced.
  3. The resulting 2240 x 1260 RGB image is downsampled 7:6 to the final 1920 x 1080 px resolution.
  4. Compared to an optimum architecture, only 1/3.6 of the sensels are read, which makes the D800 lose up to 1.8 stops in high ISO video performance.
  5. When zooming into a live view image, the D800 switches line skipping from 3x to 2x.
  6. Manual focus should use zoomed live view, focusing on vertical edges.
Overall, I am personally pleased with the implementation Nikon has chosen. It refines an idea originally used in the 5DmkII, and it is more difficult to implement here due to the higher overall number of pixels. Because of downsampling from 1260p to 1080p, I actually expect slightly better resolution than from a 5DmkII or from a camera which bins sensels prior to demosaicing.

On the other hand, there will be no more excuses for line skipping in the future. Not after Nokia got rid of it in their 41 MP 808 mobile phone ...


Enjoy your read :)
Falk

October 3, 2011

Wiesn Girls in dirndl

A girl at Oktoberfest 2011 München (aka Wiesn). Wiesn girl #1

A trademark of the Wiesn (the original Oktoberfest) are girls wearing dirndls, showing off their (sometimes remarkable) décolletage, also known as cleavage. The dirndl is a traditional dress in Bavaria. It used to be worn by poorer working girls in the past. But more recently, it became rather popular at Oktoberfest because of its figure-hugging qualities. It even appears that the dirndl is currently making new friends all over the world. A fashion trend made in Munich.

So, all of this theoretically provides ample photo opportunities, like in a giant fashion show. Except that the Wiesn is a very crowded party zone and nobody wants to make it easy for you. On the other hand, taking photos is a tolerated sport as long as you are serious about it and the photos don't show up on the internet without permission (which is why I omitted the faces of people I have no permission from). Nevertheless, good décolletage photos are rare (except of prominent people posing to be photographed), so I tested what can be achieved.

Equipment wise, I used a Pentax K-5 with the DA* 60-250mm/4 using AF and a Pentax K-x with a DA 15mm/4. I fixed the latter to f/8 and 1.4m manual focus with SR turned off to have sub-second reaction time to take photos in the crowd. No photo bag to keep a low profile. This turned out to be quite a capable combination.

Actually, the girl in the opening image comes from a video I shot with the K-5 at 250mm using manual focus while she walked up the staircase to the Bavaria monument at the Wiesn. The manual focus was difficult to maintain (and is not perfect). But I still like the result as a photo I may not have been able to capture as a still.

Below is this staircase in the background.
Bavaria monument

Below, I'll show impressions from Wiesn 2011. Note that many images are heavily cropped. Also, some post-processing was applied to reduce some hard shadows from the sunny day.

Wiesn girl #2


Wiesn girl #3 (note the cleavage)

Wiesn girls #739.347 to #968.567

Wiesn girl #4 ("Awaiting him while being looked at")

Wiesn girl #5

Wiesn girl #6

Wiesn girl #7

Wiesn girl #8

I'd like to know from your comments whether you like the fashion trend at Oktoberfest. It definitely provides some nice eye catchers. If you have any favourites in this mini series, post a comment. I'd like to know your taste too :)

Thanks for stopping by and I hope you enjoyed your stay.


Falk

June 24, 2009

K-7 as a movie camera -- PART III: Sample video

-> Link to part I

After all the preparation, I would like to share a short and uninspired video with you. And some thoughts about post processing ...

This is a short video sequence shot at 1536 x 1024 with a Pentax K-7 during a classic car meeting earlier this year near Munich.



Pentax K-7 movie in 1080p from falconeye on Vimeo.


This video is nothing special. But I will use it to illustrate


Options for post-processing


1. The form factor.


720p video is 1280x720, 1080p video is 1920x1080, both in 16:9 aspect ratio. The native video capture is in 1536x1024 or 3:2 aspect ratio.


  • 1536x1024 -> 720p: magnify by 83.33%, crop 7.81% (66.7px) from upper and lower edge each.

  • 1536x1024 -> 1080p: magnify by 125%, crop 7.81% (100px) from upper and lower edge each.

There is no benefit in recording in 720p directly, except for better control of framing (16:9 framing on the rear display) and smaller file sizes. For 720p, the camera does the same supersampling to 83% size that one would otherwise do in post-processing. On the other hand, keeping the 1536x1024 material provides more options for post-processing.

1536x1024 material is not suitable for direct presentation. Here, 720p is a better option.
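The crop-and-scale arithmetic above can be captured in a small helper (a sketch; crop_percent is a hypothetical name for illustration, not part of any software mentioned here):

```python
# Scale 3:2 source material to a 16:9 target width, then crop the excess height.

def crop_percent(src_w, src_h, dst_w, dst_h):
    """Returns (scale in %, pixels cropped per edge, crop per edge in %)."""
    scale = dst_w / src_w
    scaled_h = src_h * scale
    crop_each = (scaled_h - dst_h) / 2           # cropped from top and bottom each
    return scale * 100, crop_each, crop_each / scaled_h * 100

print(crop_percent(1536, 1024, 1280, 720))    # 720p:  ~83.33%, ~66.7 px, ~7.81%
print(crop_percent(1536, 1024, 1920, 1080))   # 1080p: 125%,    100 px,   ~7.81%
```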



2. Recoding.


Most video editing software will directly open the AVI file stored on SD card. Additionally, Apple Quicktime will open the file and Quicktime Pro allows to extract individual or all frames as images. Or to recode the movie, e.g., to MP4 AVC H.264.

Photoshop CS3/CS4 can open the AVI file directly as well and you get a video layer. There, you can do many image operations like you do for still images and recode, e.g., again to MP4.

After touching up the raw material, I used Adobe Premiere Elements for more video-oriented editing. On the Mac, MacOSX has something similar on board, called iMovie.



3. Quality.


The K-7 captures stunning video quality. But it isn't good enough to justify the extra size coming with 1080p, compared to 720p. At least not without further touching up the quality.

Below, you'll see two frame images from the short clip above. The first is as out of the camera.



The second image is post-processed using my K-7 video IQ master (a program which I developed to defeat the "768-aliasing artifact"). The K-7 video IQ master is work in progress and yet unpublished. The basic idea is to use the insight about the submatrix as described in part I and try to correct some of the weaknesses in the original algorithm as built into the camera.



You may have to click on the images to see the original size to study the difference. According to several opinions, the filtered footage has less fringing and fewer jaggy lines while still gaining (or at least maintaining) overall detail (read: without becoming softer). It was used to create the 1080p clip in the opening of the article.

I hope that a forthcoming version of the K-7 video IQ master will be good enough to render 1080p footage to the same stunning quality we now see in 720p footage. And further improve on 720p quality. Btw, I very much welcome any comments on this topic.

As a side note. It is no problem to add motion blur to frames belonging to a panning action. Or tilt to remove the skew. Just open the corresponding video sequence in CS3.



Conclusion:


This shall conclude my three part article about the video capabilities of the Pentax K-7.

The video mode in the Pentax K-7 looks like a very promising proposition. It can produce stunning video quality, esp. in 720p. However, Pentax made a number of trade-offs to keep the camera affordable as a still camera (and it is a high spec camera already w/o video). Therefore, to achieve maximum possible quality in movies, a number of tweaks applied during capture and in post-processing are of help.



Enjoy your moving images :)


June 23, 2009

K-7 as a movie camera -- PART II: Controlling video recording


In part I, we have discussed the technical implementation of the video feature in the Pentax K-7.

Now, let's see how to make good use of it. Again, I'm not going to repeat the specification or user's guide.

At first glance, it seems to be very straight-forward: Turn the mode dial to movie and press the shutter! Ok, been there, done that. Now, for the more serious fun: how do we control shooting parameters?

The official answer by Pentax is: you don't! There is almost no manual control over shooting parameters. But as always, there are back doors :)

Things to know:

  1. You can set global parameters like pixel format, quality and shake reduction in the "video" menu (press the menu button when in video mode); these are not affected by all the nice settings you may have tweaked elsewhere. E.g., forget about your Auto-ISO range ;)

  2. Autofocus, aperture and E/V-compensation can be set prior to a recording. All three are defunct while recording. Of course, the camera won't complain if you change aperture or focus manually (read: mechanically). The focus in video mode is contrast autofocus by default. But you can switch to phase autofocus which is faster.

  3. The exposure (with all its parameters) can be locked/unlocked before and during a recording, using the AE-L button.

  4. EV values in video and still image modes seem to be the same.



Controlling exposure


Knowing the video exposure response curve is key to determining and manually controlling an exposure in a K-7 video. It isn't published by Pentax, but I researched it to be this:

Assuming an aperture is preset to a given value:

Light value [EV] at aperture f/...                    Shutter [s]   Sensitivity [ISO]
 1.4  2.0  2.8  4.0  5.6  8.0   11   16   22   32
   0    1    2    3    4    5    6    7    8    9     underexposure (blinking red)
   1    2    3    4    5    6    7    8    9   10     1/30          3200
   2    3    4    5    6    7    8    9   10   11     1/30          1600
   3    4    5    6    7    8    9   10   11   12     1/30           800
   4    5    6    7    8    9   10   11   12   13     1/30           400
   5    6    7    8    9   10   11   12   13   14     1/30           200
   6    7    8    9   10   11   12   13   14   15     1/30           100
   7    8    9   10   11   12   13   14   15   16     1/60           100
   8    9   10   11   12   13   14   15   16   17     1/125          100
   9   10   11   12   13   14   15   16   17   18     1/250          100
  10   11   12   13   14   15   16   17   18   19     1/500          100
  11   12   13   14   15   16   17   18   19   20     1/1000         100
  12   13   14   15   16   17   18   19   20   21     1/2000         100
  13   14   15   16   17   18   19   20   21   22     1/4000         100
  14   15   16   17   18   19   20   21   22   23     overexposure (blinking red)

How to read the table: Look up the aperture in the header row (e.g., f/5.6), find the measured light value in the corresponding column (e.g., EV 10), and look up shutter speed and ISO in the same row (e.g., 1/30s, ISO 100).
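The table can also be expressed as a small lookup function (a sketch based on my reverse-engineered data above, not on anything official from Pentax; video_exposure is a hypothetical helper name):

```python
import math

# Rows of the response table for EV offsets 1..13 within an aperture column.
SHUTTER_ISO = [
    ("1/30", 3200), ("1/30", 1600), ("1/30", 800), ("1/30", 400),
    ("1/30", 200), ("1/30", 100), ("1/60", 100), ("1/125", 100),
    ("1/250", 100), ("1/500", 100), ("1/1000", 100), ("1/2000", 100),
    ("1/4000", 100),
]

def video_exposure(aperture, ev):
    """Return (shutter, ISO) the K-7 appears to pick for a preset aperture."""
    stop = round(2 * math.log2(aperture / 1.4))  # f/1.4 -> 0, f/2 -> 1, ... f/32 -> 9
    row = ev - stop
    if row < 1:
        return "underexposure"
    if row > 13:
        return "overexposure"
    return SHUTTER_ISO[row - 1]

print(video_exposure(5.6, 10))   # the worked example above -> ('1/30', 100)
```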


I didn't research much how aperture would be controlled if set to AUTO. But it will choose a combination from the diagonal line of constant EV in the table above. In AUTO, the aperture is controlled live and you can hear the aperture blades moving! IMHO, this is a cool feature for a dSLR with a legacy SLR lens!

(Update) I had another look at this.

In video recording with aperture set to AUTO, I illuminated the sensor with a torch and observed the lens aperture's reaction. This is what happened:

  • Without extra light: wide open (f/2.4)
  • With medium extra light (torch into lens): shut down (f/5.6) (*)
  • With full extra light (torch fully aligned and directly in front of lens): closed down (f/11) (*)

(*) estimated from diameter left open by aperture blades, as seen thru the front lens element.

There are really only these 3 steps. With an f/5.6 lens, this reduces to only two steps. If it changes aperture, it does so by jumping at least 2 EV. Of course, this causes a visible jump in brightness in the video which is then compensated afterwards. Therefore, manually shifting the aperture creates a much smoother effect. (end of update)




Note: The above table may not be fully accurate. E.g., the sweet spot shutter speed (1/30s) may be shorter, like 1/50s, actually. Note that many videographers prefer a shutter speed of 1/30s or 1/50s (i.e., the motion blur from it) for 30fps footage to minimize a stuttering effect in panning action. Also note that all this is from my own research. Pentax doesn't disclose the information given above and the recorded video contains no useful meta information.

Now, using the response curve above, you can meter any subject, set the required aperture, use E/V-compensation to hit the required EV value, switch to movie mode (E/V compensation stays active!), lock exposure with AE-L and you successfully manually controlled your video parameters!

It is possible. But I agree that it is awkward in many if not most circumstances. If you need longer shutter times in daylight without wanting to stop down, then you need to use a gray (neutral density) filter, like you maybe would for still photography of water. Note that for video, you can stop fully down without losing sharpness in the resulting video.


However, what I did find very easy to use is the following trick: In video mode, before starting to record, I observe the live histogram on the rear display when pointing to different subjects which will emerge during a scene. Then I point to a "typical" histogram and lock exposure. Eventually, I record using these parameters.



Controlling focus


Autofocus stays inactive during recording (it was enabled up to firmware release 0.20). However, it is so slow and badly implemented that it wouldn't be useful in actual footage anyway. If you need to refocus using the autofocus, the fastest way is to set live view autofocus to phase detect, stop the recording (press the shutter again and wait ~2s), press the AF button (~1s) and start recording again (~1s). If there is no time for a ~5s break, then you must control focus manually. Unfortunately, magnified live view is inactive during recording as well (and the viewfinder stays dark, of course).

Fortunately, the quality of the rear screen with its 640x480 resolution allows approximate focusing. If a DoF effect isn't required, then shooting with fixed focus at the hyperfocal distance is a viable option. The DoF calculations as obtained for an APS-C sensor still apply. If you need to pixel peep, set the circle of confusion to 0.015 mm.
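For fixed-focus shooting, the hyperfocal distance follows from the standard DoF formula; a minimal sketch using the 0.015 mm pixel-peeping circle of confusion quoted above:

```python
# Standard hyperfocal distance formula: H = f^2 / (N * c) + f.

def hyperfocal_m(focal_mm, f_number, coc_mm=0.015):
    """Hyperfocal distance in metres for a given focal length and aperture."""
    return (focal_mm**2 / (f_number * coc_mm) + focal_mm) / 1000

print(round(hyperfocal_m(15, 8), 2))   # a 15mm lens at f/8 -> ~1.89 m
```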

If you need more control over the focus, using either a field monitor or an enlarging eyepiece on the rear screen may be an option. The K-7 outputs live view and live audio via HDMI in 480p, 576p, 720p or 1080p. The image at the top of this article shows a K-7 hooked up to an HDMI 1.3 type C cable. However, the video data will always be 480p only, possibly enlarged to match the HDMI protocol.





K-7 videographing its own live view from falconeye on Vimeo.



The video above shows how the live feed from the K-7 behaves. In particular, you can assess the latency between reality and HDMI output. I guess it is about 1/3 s.

There are field monitors one can connect via HDMI and mount to the hot shoe or a flash bracket. The flash bracket, however, may be the better choice if a microphone is already mounted to the hot shoe ;) Because the feed is always 480p only, there would be no additional benefit in getting an 800x600 or 720p field monitor (this notice may not hold true for the Samsung GX30). A 5-6" 480p field monitor with HDMI input will do. Avoid monitors with analog A/V input only.

Field monitors may have a headphone jack for audio playback as well.

And of course, for proper framing and smooth operation, you'll definitely want a video rig :)


Controlling shake


The K-7 features electro-mechanical sensor stabilization which proves very efficient in video capture. Note, however, that it is designed for still photography. So, it cannot compensate for all the shake during a longer take. Because a wide angle lens requires the anti-shake to compensate less and is more stable in the first place, it is best to use wide angle lenses (and tele lenses on a tripod). Or a rig again ;)

Note that a wide angle lens has a closer hyperfocal distance and produces smoother panning as well. So, I just adopted wider lenses as my standard when videographing.


Controlling audio


The built-in mono microphone is very sensitive to environmental noise like wind. So, using an external (stereo) microphone is a much better option (or use external sound recording and a take board). I tried the RØDE Stereo VideoMic connected to the flash hot shoe and it produces excellent results. The recorded quality is definitely more than sufficient for voice and sound. For music one may want to record externally.

Btw, there is no control of the recording volume. But the recording level is relatively low and I didn't have any problems with either clipping or the noise floor.



Now, let's go out and have fun with video. I.e., it is time for part III.

-> Continue to part III

June 22, 2009

K-7 as a movie camera -- PART I: The technical foundation

One of the more exciting features of the K-7 is its ability to record video in HD.

I will not repeat the specifications here. But it has an excellent 720p@30Hz recording mode and a 1536x1024p@30Hz mode which is almost full HD and can be used to create FullHD footage. In practice, it writes MJPEG at about 50 MBit/s and therefore can often outperform video recorded in AVCHD.

Many would think that the Nikon D90 was the first dSLR sporting HD video. But this is only half true. The earlier Pentax K20D already included the ability to record video at 1024p@21Hz, although limited to clips of 5.6s each. So, the video function in the K-7 is regarded by many Pentaxians as already being version 2.

My article about video will come in three parts:

  1. The technical foundation (this part)

  2. Controlling video recording in practice

  3. Sample videos



PART I: The technical foundation


I am the kind of person who needs to look under the hood. Driving is fun. But seeing and grabbing the engine underneath is fun too ;) Read what I've found out.

First of all, video from a dSLR isn't a trivial thing to implement. Of course, one could read out all 14.6 million raw pixels 30 times a second (or at least 24), construct the corresponding video frames, ideally by supersampling pixels into the smaller size, and compress to a video codec. But this implies a tremendous processing load (i.e., processing 6.53 GBit/s of raw input data in real time!). Implementing it this way (and I think Canon does so in their 5DmkII) makes the camera significantly more expensive than would have been required by still photography alone.

Pentax chose to implement it in a way which doesn't increase cost at all. The sensor of the K20D can be read out at a rate of 750+ MBit/s, or 375+ MBit/s for each of its two channels. This is exactly enough to support 3.3 fps at full resolution. In order to support 5.2 fps with the K-7 (seemingly 6.0 fps in the Samsung GX30), Pentax/Samsung doubled the number of channels to four, and the K-7 sensor can be read out at a rate of 1.5 GBit/s. As staggering as this figure may seem, it is still only ~20% of what would be required following the approach above. It would have been somewhat easier with a 6 MPixel camera though ;)
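The bandwidth arithmetic can be sketched as follows; note that the ~15 bits per raw pixel (data plus overhead) is my own assumption, chosen so that the numbers reproduce the 6.53 GBit/s figure above:

```python
# Rough readout-bandwidth comparison. The 15 bits/pixel figure is an
# assumption (data + overhead), not a published specification.

full_readout = 14.6e6 * 30 * 15      # bits/s for all pixels at 30 fps, ~6.6 GBit/s
k7_readout = 1.5e9                   # K-7 sensor readout over its four channels
fraction = k7_readout / full_readout # ~0.23, i.e. roughly the "~20%" above

print(round(full_readout / 1e9, 2), round(fraction, 2))
```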

Therefore, Pentax created a special subsampling mode where only every 6th raw pixel value is read out from the sensor at 30 Hz. This special signal is always used to create live view, zoomed live view and HD video frames, for all of Pentax K20D (21 Hz), Pentax K-7, Samsung GX20 (21 Hz), Samsung GX30 and Samsung NX' electronic view finder.

Because an HD video frame has only 2 MPixels (or less), which is about 1/6th of the pixels of a 16:9 still image, this means that at the same ISO step, pixel noise from a video frame and from a still image will look very similar. Of course, using only every 6th pixel loses 2.5 EV stops in low light performance compared to the ambitious supersampling approach. However, even 1/6th of the surface of an APS-C sensor is still a lot larger than the combined surface in dedicated 3-chip HD camcorders. So, videos in low light will look good. It is just that they could look even better.



The subsampling matrix


How are colors reconstructed if only every 6th pixel is read? This is a problem because standard demosaicing techniques as known from a Bayer matrix don't apply. Rather, color is constructed from picking values out of the Bayer matrix with a given color filter. Below is one variant of possible locations where color values are picked from:


(note: if you click onto the image, you can find another variant which most likely would yield better results.)

As you can see, the submatrix forms a pattern repeating every 6x6 pixels. Only 6 pixel values are picked from such a 6x6 area, producing 2 RGB pixels, e.g., as outlined by the black boundaries.

To tell the truth, Pentax applies a little bit of demosaicing magic and produces 4 RGB pixels from the information of 6 raw pixels. However, it does so in a rather bad way, leading to the (arti)fact that two horizontally adjacent RGB pixels have very similar values, almost reducing the advertised resolution of 1536x1024 down to 768x1024. In part three, however, we will see that part of the information is still there. We'll call this effect the "768-aliasing artifact".

Another problem with the subsampling matrix is that colors are spatially translated, i.e., the green channel sits left of the red+blue (=purple) channels within the subsampling matrix. This leads to green fringe where contrast changes from bright to dark (left to right), and purple fringe where contrast changes from dark to bright. You can see the effect in the following image:



(400% crop of a video frame, left from the K-7, right from the K20D)


You can see the fringe effect which is exactly as wide as the 6x3 subsampling matrix (2 pixels in horizontal direction). Also, K20D and K-7 share almost the same subsampling matrix, with a bit less of the "768-aliasing artifact" in the K-7.
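The fringe mechanism can be illustrated with a 1-D toy model (the sample offsets are hypothetical, since Pentax doesn't publish the exact submatrix; only the spatial color shift matters here):

```python
# Toy model: green sampled 2 sensels left of red+blue within each 6-sensel
# period. The offsets are hypothetical; the point is the spatial color shift.

scene = [1.0] * 31 + [0.0] * 29        # bright -> dark step edge at sensel 31

g = scene[0::6]                         # green samples, one per 6-sensel period
rb = scene[2::6]                        # red+blue ("purple") samples, 2 sensels right

# Where the edge falls between the two sample positions, green still sees the
# bright side while red+blue already see the dark side: a green fringe.
fringe = [gv - pv for gv, pv in zip(g, rb)]
print(fringe)   # a single positive spike at the edge
```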

There is a simple experiment showing that demosaicing of the subsampling matrix is very rudimentary indeed: Pierce a tiny hole into a black cardboard, position the K-7 on a tripod ~10 m away and zoom into the hole's image using 10x zoomed live view. With a sharp lens, at most a single raw pixel in the subsampling matrix is hit. What you see on the rear screen is an image of the hole which is either dim or bright, in an arbitrary color, depending on minor movements of the camera. The hole is dim if a raw pixel is hit which doesn't take part in the subsampling matrix.


So, to summarize, some fringe (false colors) and jaggy edges are artifacts resulting from the way video frames are extracted from the sensor. Note that the effects are much less visible in 720p, which is supersampled from the 1536x1024 feed. The K-7's 720p video quality is stunning.

You may look at the 1536x1024 recording quality as being the "raw image equivalent" for 720p final images. Needing post-processing but retaining extra headroom.




Rolling shutter


Another common problem with video in dSLRs is the rolling shutter. Because the mechanical shutter is too slow for 30 fps, it stays open all the time and an electronic shutter is used. In the K-7, the sensor is read-out line by line in progressive order, bottom edge first, within a period of ~1/30s. Because the image is projected head down, the lines at the top of a final image have been read-out earlier than the lines further down. If the camera is panned, e.g., left to right, then vertical lines become slightly skewed, e.g., tilted anti-clockwise. Therefore, this effect is called skewing effect or jello effect as well.

The jello effect does actually bend non-horizontal straight lines if they are rotating with respect to the camera (so, avoid tilting the camera when recording). It can produce funny images with rotating structures, like propellers, actually :)

As for the K-7, this effect is very well controlled. The skew is exact and without extra jagginess. I.e., the read-out is uninterrupted. Very good! If combined with a bit of motion blur, it becomes almost invisible. I would say that the rolling shutter is almost a non-issue with the K-7. Combine this with the lack of motion-compression artifacts and a gray filter maybe, and the K-7 can produce really stunning panning action.
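To get a feeling for the magnitude, the skew of a vertical line during a pan can be estimated as follows (a sketch assuming the read-out sweeps the full frame height within one 1/30 s frame period, as described above):

```python
import math

# Estimated tilt of a vertical line for a horizontal pan under a rolling shutter.

def skew_deg(pan_px_per_s, frame_h_px=1024, readout_s=1/30):
    """Skew angle in degrees; pan speed in pixels per second."""
    shift_px = pan_px_per_s * readout_s          # top vs. bottom displacement
    return math.degrees(math.atan(shift_px / frame_h_px))

print(round(skew_deg(600), 2))   # a brisk 600 px/s pan -> ~1.12 degrees
```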



Video compression codec


Another strong point is the selection of high bitrate MJPEG as the compression codec. Not being an amateur's first choice, it offers greater headroom for post-processing. Individual frames are JPEG-compressed and about 200 KBytes large. The JPEG compression artifacts are visible on larger than 100% inspection but don't disturb. Also, MJPEG is an easy format for post-production. And, it doesn't cause extra burden on the in-camera processing engine.

The container is "Motion JPEG OpenDML AVI" and can be opened on all platforms, e.g., using Apple Quicktime. Of course, for long-term archival, MJPEG needs to be recoded (e.g., to MP4 AVC) to save space.



Audio


The K-7 has a built-in mono microphone capturing a clear sound in the absence of any wind.

It does have a plug for a stereo microphone too. I tried the RØDE Stereo VideoMic connected to the flash hot shoe and it produces excellent results.

It plays back audio over the built-in speaker which is really bad, though ;) For decent audio in the field, one would require an HDMI-capable field monitor with a headphone jack. Yes, I checked that audio is played back via HDMI.

The audio quality seems to be good too. The format is 1 or 2 channels of 16 Bit, 32 kHz PCM.



Temperature


After 5 - 30 minutes of continuous video recording, a red thermometer pops up on the rear screen, warning about increasing temperature. I never had a recording stopped by the temperature alert. But it may be a concern in very hot regions, in full sunshine, and for extended scene recording.

(Update) I've test-driven continuous video recording at 27°C ambient temperature for 40 minutes. The red thermometer appeared after about 15 minutes. But the camera didn't interrupt. The camera felt warm and had used up one out of three battery marks.

A still image shot at the end was at ISO 800 and 40°C. It shows a very faint vertical line in the middle, about as pronounced as the ISO 800 noise, only visible against a uniform background at 100% and invisible in normal photography. Additional vertical lines exactly 256 pixels apart are invisible even against the uniform background. I don't consider this hot temperature banding to be significant, considering it is from a pre-production sensor.

Other observations: right after movie recording with temperature alert, the camera refuses to enter LV but still enters movie mode and continues recording movies (or allows still images using the viewfinder). Only 20s after movie recording, it accepts LV again, incl. the red thermometer. After 2 minutes, the red thermometer has disappeared.

I think that below 30°C ambient temperature, it won't emergency-stop a movie recording. (end of update)


This shall conclude Part I. Next will be a discussion about making the K-7 video feature more accessible for real projects.

-> Continue to part II