[diagram] New Lagless VSYNC ON Algorithm for emulator devs 

Joined: 2018-03-15 23:09
Posts: 24
 [diagram] New Lagless VSYNC ON Algorithm for emulator devs
Hello,
Introducing myself as the founder of Blur Busters.

EDIT1: A few emulators now have beamracing options (thanks Calamity, Toni Wilen, Tommy, etc. -- we're collaborating here and offline)!

EDIT2: It's cross-platform! High performance is optional! Low-frameslice-count beamracing still works on laptops, Macs and older PCs -- please see newer post 1 and newer post 2 for newer findings. 2-frameslice (screen-half) beamracing still has lower input lag than combined "input delay + Hard GPU Sync"

For emulator authors:
I have done successful experiments (to a 0.5ms margin) with a brand new raster synchronization algorithm:
https://www.blurbusters.com/blur-buster ... evelopers/

Basically, a lagless "VSYNC ON" via raster-synchronized redundantly-flipped VSYNC OFF.
[diagram]

I've discovered a way to successfully synchronize the realworld raster to the virtual raster scanline within a ~0.5ms delay (a few scanlines' worth of jitter), while looking like perfect VSYNC ON. Essentially, it's a tearingless VSYNC OFF that does precision raster-timed flips of the same VSYNC OFF framebuffer repeatedly, timed via the raster scanline APIs available in Windows -- a 1000-to-2000fps VSYNC OFF full-framebuffer virtualization of a rolling-window multi-scanline buffer. All using 100% standard Direct3D APIs! I don't know if anyone else has tried this before (and failed), but it works (~2000fps on current GPUs = 0.5ms to 1.0ms beam race jitter margin)...

The 0.5ms window allows PC performance to fluctuate (you can't always be perfectly microsecond-exact), so you don't need perfect line-for-line raster sync -- just a ~0.5ms realworld-raster chasebehind of the virtual raster on the display scanout.

Same lag as a real console. Same lag as a FPGA "emulator". No GPU framebuffer delays!

I programmed raster interrupts on a Commodore 64 in 6502 machine language in Supermon 64, so I understand rasters well enough to appreciate the timing precision needed -- things like multiplying 8 sprites into 16 and 32, and splitscreen scrolling zones. So I researched whether raster synchronization between the real raster and a virtual raster could be possible with a forgiving enough margin for PC performance jitter. The answer is yes, with a trick.

Virtualized-realworld raster synchronization is now possible on 60Hz displays, thanks to a performance-jitter-forgiving multi-scanline rolling-window trick (~0.25ms, ~0.5ms, ~1ms -- a configurable constant). The size of the rolling window is a function of the VSYNC OFF framerate you can achieve (2000fps = a 0.5ms rolling window of scanlines), so you don't need to be scanline-exact. That solves the problem: you can now do realtime raster-follower algorithms for virtual-vs-real rasters in emulators to reduce input lag!

For slow computers you may need 2ms or 3ms window sizes. But on fast computers (Administrator, Realtime Priority, Titan/1080Ti), I've determined a <0.1ms-jitter raster follower is possible! Yes, it works on 60Hz arcade CRTs. I am sure people have tried this before and failed, due to performance errors or a lack of understanding of how graphics cards work (and the technical know-how to precisely time the swapping of full-frame, garden-variety Direct3D VSYNC OFF framebuffers to de-facto simulate a sub-frame rolling-window scanline buffer) -- but done properly, raster sync works!

I don't know if any of the byuu emulators are architected in a way that is friendly to this level of accuracy, but this is a new algorithm for emulator developers to try -- it will be useful to Melee players and other lag-critical retro gamers.

That said, this algorithm may be useful to any emulator developer whose goal is to minimize emulator input lag to the absolute theoretical minimum.


2018-03-15 23:13

Joined: 2016-07-25 01:17
Posts: 27
 Re: New method: Syncing emulated rasters with realworld rast
The problem with this method relating to higan is that since higan requires so much horsepower to run, hitting even 900FPS is unmanageable on current hardware. Try running a game in higan and enabling turbo mode to see how fast it will run. My computer (6600K) gets to about 120FPS, and it will be even slower when running special chip games like Yoshi's Island and the Kirby games.

Byuu has written an article regarding latency in higan that is worth reading: https://byuu.org/articles/latency/

The method you are talking about would be a better fit for faster, less accurate emulators such as SNES9x or for other systems like NES and Game Boy, as those may possibly hit the desired 1000-2000 FPS.

Also, do you know if this same method would work in OpenGL and/or Vulkan, for those of us who don't use Windows?


2018-03-16 21:27

Joined: 2018-03-15 23:09
Posts: 24
 Re: New method: Syncing emulated rasters with realworld rast
What you are saying is not applicable here -- at all.

Quote:
The problem with this method relating to higan is that since higan requires so much horsepower to run, hitting even 900FPS is unmanageable on current hardware. Try running a game in higan and enabling turbo mode to see how fast it will run. My computer (6600K) gets to about 120FPS, and it will be even slower when running special chip games like Yoshi's Island and the Kirby games.

The beauty of this is that you DO NOT need to run the emulator at 1000fps to get 1000 VSYNC OFF pageflips per second.

It's simply (1000/60) = ~16 duplicate pageflips per emulated frame.

Pseudocode for a 1ms scanbehind, repeated for each emulator refresh cycle:
Code:
LOOP (until emulated raster reaches bottom of screen)
   Emulate 1 millisecond's worth of emulation
   Plot the resulting scanlines into the SAME framebuffer
   Poll the realworld raster & busywait until it is within the prescribed jitter margin (~0.5ms) of the emulated raster
   Execute a VSYNC OFF pageflip of the SAME framebuffer (tearline never appears)
ENDLOOP
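
For concreteness, here is a hedged C++/Direct3D 9 sketch of that loop -- an illustration under stated assumptions, not production code. EmulateAndPlotSlice() is a hypothetical emulator-side helper, and the device is assumed to be created with D3DPRESENT_INTERVAL_IMMEDIATE (VSYNC OFF) and D3DSWAPEFFECT_COPY (so backbuffer contents persist across Present):
Code:
#include <d3d9.h>

// Hypothetical emulator-side helper: runs one slice's worth of emulation and
// plots the new scanlines into the SAME backbuffer.
void EmulateAndPlotSlice(int firstLine, int numLines);

// Beam-race one emulated refresh cycle using frameslices. Assumes 'dev' was
// created with D3DPRESENT_INTERVAL_IMMEDIATE (VSYNC OFF) and
// D3DSWAPEFFECT_COPY (backbuffer persists across Present).
void BeamRaceOneRefreshCycle(IDirect3DDevice9* dev,
                             int activeLines,   // visible scanlines of output mode
                             int sliceLines,    // output scanlines per frameslice
                             int marginLines)   // jitter margin, in output scanlines
{
    for (int frontier = sliceLines; frontier <= activeLines; frontier += sliceLines) {
        EmulateAndPlotSlice(frontier - sliceLines, sliceLines);

        // Busy-wait until the realworld raster is within the jitter margin of
        // the previously presented frontier. Above the flip point, both
        // buffers hold identical pixels, so no tearline is visible. (The
        // first slice of each refresh should really be timed off the VBI
        // "starting pistol" -- see the VBI discussion later in this thread.)
        D3DRASTER_STATUS rs = {};
        do {
            dev->GetRasterStatus(0, &rs);
        } while (!rs.InVBlank &&
                 (int)rs.ScanLine + marginLines < frontier - sliceLines);

        dev->Present(nullptr, nullptr, nullptr, nullptr);  // VSYNC OFF pageflip
    }
}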

EDIT: See new diagram

The mathematics are simple. NTSC scans ~15,734 scanlines per second (~15.7KHz horizontal scan rate), so if you're doing a 1ms scanbehind, that's ~15-16 emulator scanlines, or 15/240ths of screen height if your output is 1080p (~68 output scanlines). You just make sure you pageflip on time, before the realworld raster passes the emulated raster, with enough time margin to finish the next frameslice's worth of emulator scanlines. So 1,000 bufferswaps per second gives you 1ms jitter + 1ms render delay = 2ms lag; 10,000 bufferswaps per second gives you 0.1ms jitter + 0.1ms emulator plot-out delay = 0.2ms lag.
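
To make those numbers concrete, here is a tiny back-of-the-envelope calculator in C++ (the constants are assumptions: ~15.7KHz NTSC line rate, and 1125 total lines per frame for a 1080p60 signal):
Code:
#include <cstdio>

int main() {
    const double emuLineRate = 15734.0;        // NTSC scanlines per second
    const double outLineRate = 1125.0 * 60.0;  // 1080p60 output scanlines/sec (~67.5KHz)

    for (double swaps : {1000.0, 10000.0}) {
        double sliceMs = 1000.0 / swaps;       // time per frameslice
        printf("%5.0f swaps/sec: %.2f ms slices, ~%.0f emu lines, ~%.0f output lines, ~%.1f ms worst-case lag\n",
               swaps, sliceMs,
               emuLineRate / swaps,   // emulator scanlines plotted per slice
               outLineRate / swaps,   // output scanlines scanned per slice
               2.0 * sliceMs);        // jitter margin + plot-out delay
    }
    return 0;
}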

Basically, you're adding rasters to the same framebuffer and repeatedly delivering the partially rasterplotted frame (it can be black near the bottom of the frame -- that blackness will never show up on the realworld display, as long as the emulated raster always stays ahead of the realworld raster).

You're simply emulating a rolling-window multi-scanline buffer via ultra-high-frequency buffer swaps. Since each buffer swap occurs on already-plotted-out duplicate framebuffer regions, the tearlines never appear. You're still only doing 60 emulated frames per second.

The processing overhead is not much greater. The limiting factor is simply GPU bandwidth -- it doesn't take long to repeat a pageflip of a duplicate framebuffer, and 320x240 framebuffer deliveries to the GPU take a negligible fraction of a computer's time.

This can still work for higan.

I successfully tested this, and my algorithm added only 10% CPU overhead while multiplying pageflips by about 35x-60x. I was doing 6,500 pageflips per second on 2560x1440 duplicate framebuffers in C#, using the MonoGame engine. The GPU became more of a bottleneck at that stage, with the CPU practically waiting for the GPU. Occasional garbage collects (about 1 every 5 seconds) did briefly cause a tearline to appear. You can do better in C++ and assembler.

You're still doing the same number of emulator rasters per second.

You're simply bufferswapping on the fly at intervals mid-emulated-raster, at roughly 1/16th screen height increments (for 1000 buffer swaps per second) or 1/160th screen height increments (for 10,000 buffer swaps per second).

I have now updated the article to clarify this.

Guys/gals/whomever, I've done high speed camera tests pointing at displays -- like http://www.blurbusters.com/lightboost/video -- so I understand realworld rasters on HDTV signals; they're still top-to-bottom, too. Many gaming monitors have nearly lagless raster scanout of LCD pixels (with only a GtG-fade lagbehind, as seen in that high speed video), so VSYNC OFF is effectively lagless. And yes, again, I've programmed raster interrupts on a Commodore 64 to multiply sprites from 8 to 16 or 32. I may not be demoscene material, but I've done scrolling zones in a Uridium clone called SpaceZoom, 100% programmed in Supermon 64 machine language, with all the LDA, ADC, STA, STX, STY instructions and $0400,X's and the like -- I've been there Back in the Day like you smart emulator writers -- so trust me that this overhead is not a big add-on.

It's simply extra redundant buffer swaps mid-emulator-refresh-cycle, that's all (see diagram). Modern GPUs can do over 10,000 redundant buffer swaps per second (redisplaying the same frame), so doing just 1,000 will only add a few percent overhead to an emulator. (The busywait polling of the realworld raster will likely be the bigger performance overhead -- but that's simply your emulator software actually *waiting* for the graphics card's raster to reach a target scanline within the jitter safety margin.)

My raster follower algorithm now works on modern machines with only a few percent extra overhead.

EDIT: High performance is not necessary. Low-frameslice-count beamracing still works on laptops, Macs and older PCs -- see newer post 1 and newer post 2 for newer findings. 2-frameslice (screen-half) beamracing still has lower input lag than combined "input delay + Hard GPU Sync"


2018-03-16 23:15

Joined: 2018-03-15 23:09
Posts: 24
 Re: New method: Syncing emulated rasters with realworld rast
Part 2 of 2:

[diagram]

(continued from previous page)
Quote:
This diagram was originally created for the GSYNC 101 series as a 144Hz example. However, it illustrates a rolling linebuffer simulated by high-frequency VSYNC OFF full-framebuffer swapping, achieving jitter-tolerant approximate synchronization of the emulated raster to the realworld raster. Done precisely, this keeps the emulated raster physically below the realworld raster, maintaining the perfect lagless VSYNC ON look without tearing artifacts.

Since the emulator frame rate is low, the frameslice immediately above the new frameslice is simply a duplicate, and buffer swaps in that region have no tearline -- VSYNC OFF swaps on identical frame regions (in our case, frameslices above the current physical raster) show NO tearline. So as long as the emulated raster stays below the realworld raster (within a tight margin, e.g. 0.5ms or so), tearlines are never physically scanned out: the emulator is forever adding new data to the framebuffer, doing a VSYNC OFF buffer swap each time new rasters are plotted into the emulated framebuffer.

The distance between tearlines is a function of frametime (and the display signal's scan rate -- i.e., how quickly ScanLine increments at its constant rate). For example, a 1080p60 signal scans ~67,500 lines per second (1125 total lines × 60Hz), so flips spaced 1ms apart produce tearlines ~67 scanlines apart. The varying distance between tearlines reflects the varying computer performance jitter.

Perfect sync (to the exact scanline) is impossible, but we don't need perfect sync -- we're doing a scanbehind approach with a specific jitter safety margin (e.g. 0.1ms, 0.5ms, 1ms, or whatever is configured). As long as jitter stays below this margin, tearlines never become visible.

Perfect 60fps CRT motion, with the VSYNC ON look and feel, but without the input lag.

If you recycle the same partially rasterplotted emulator framebuffer and pageflip it repeatedly, then to the current raster it simply looks like many duplicate pageflips -- the incomplete rasterplotted data ahead of the realworld raster never becomes visible, and thus tearing never appears!

Tests show that 10,000fps VSYNC OFF (0.1ms-to-0.2ms scanbehind input lag) is possible on high-end graphics cards in high-end systems when using low-resolution emulator framebuffers, without slowing the emulator down too much. Even with fancier HLSL framebuffers (fuzzy scanline emulation), >1,000 VSYNC OFF buffer swaps per second is still possible (= approx 1ms scanbehind of the realworld raster versus the emulated raster).

GPU overheads are extremely low: blitting smaller framebuffers (e.g. 320×240, or even 1920×1080) to the GPU takes little bandwidth and time. One can keep the GPU busy scaling the image at 10,000fps while using all of a 3GHz+ CPU for cycle-accurate emulation.

Again, the beauty is you're still only doing 60 frames per second. You only need to plot a few scanlines between buffer swaps.

You can even keep the rest of the framebuffer COMPLETELY BLACK and that blackness never shows up on the screen, because you're emulating a rolling-window frameslice approximately synchronized to the realworld raster. That's the BEAUTY of this raster-follower approach.

It's metaphorically like classic 8-bit sprite multiplication in the vertical dimension (move the sprite's Y position after it's already been scanned out) -- except you're doing frameslices on the fly during ultra-high-frequency VSYNC OFF bufferswaps. If you were an 8-bit programmer who did this, you'll have a Eureka moment and know exactly what I'm trying to say :)

It's de-facto a rolling-window frameslice (bunch of scanlines) buffer, emulated via ultra-high-framerate full-framebuffer VSYNC OFF. And the beauty is the ability to do this with standard 3D APIs like Direct3D and OpenGL.

Think of this: in Direct3D (whether low-level calls, or an easy engine like MonoGame), draw a simple white 100x100 pixel box. Scroll the box downwards in a raster-synchronized way at 1,000 frames per second (moving the box down approximately 1/16th of screen height every frame). What gets displayed on the screen is a solid 100-pixel-wide, 1080-pixel-tall rectangle on a 1080p display -- even though no single framebuffer ever contained a continuous full-screen-height rectangle! As long as each buffer flip occurs while the scanout is within the box's overlap region, the height of the 100x100 box is your jitter margin (the allowed distance between emulated raster and realworld raster) before the output stops looking like a solid full-screen-height white rectangle. And the beauty is, this is done with 100% standard Direct3D or OpenGL APIs.
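
Here is a hedged C++/Direct3D 9 sketch of that box experiment (illustrative only -- 'dev' is assumed to be a 1080p fullscreen device created with D3DPRESENT_INTERVAL_IMMEDIATE, and the first flip's VBI timing is omitted for brevity):
Code:
#include <d3d9.h>

// Race a 100x100 white box down a 1080p screen in ~1/16-screen-height steps
// at ~1000 flips/sec; a camera sees one solid full-height white bar.
void RaceWhiteBoxDownScreen(IDirect3DDevice9* dev)
{
    const int step = 1080 / 16;  // ~67 scanlines per flip
    for (int y = 0; y + 100 <= 1080; y += step) {
        // Plot this frame: black background, white box at the current Y.
        dev->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
        D3DRECT box = { 100, y, 200, y + 100 };
        dev->Clear(1, &box, D3DCLEAR_TARGET, D3DCOLOR_XRGB(255, 255, 255), 1.0f, 0);

        // Flip once the realworld raster enters the overlap between the old
        // box (y-step..y-step+100) and the new box (y..y+100). That overlap
        // (box height minus step, ~33 lines here) is the jitter margin before
        // the bar stops looking solid.
        D3DRASTER_STATUS rs = {};
        do { dev->GetRasterStatus(0, &rs); }
        while (!rs.InVBlank && (int)rs.ScanLine < y);

        dev->Present(nullptr, nullptr, nullptr, nullptr);  // VSYNC OFF flip
    }
}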

You'll need OS hooks for a raster poller on the platform of your choice (e.g. on Windows you can normally use Direct3D9)...
....But you can also extrapolate a guessworked raster by averaging the time between VSYNC events (e.g. extrapolate the scanline number from a VSYNC heartbeat using time-between-flip intervals, similar to what http://www.vsynctester.com already does in pure JavaScript). This assumes you have a way of polling for VSYNC -- more platforms have that ability than the ability to poll the current realworld raster. So you can still write wrappers to make this all portable (direct raster poll, extrapolated raster, etc). You'll want an offset configuration parameter, since there's often an offset caused by where flips occur relative to the blanking interval (e.g. flip at the beginning of VBLANK versus at the end of VBLANK). Obviously, it's much easier if you can use RasterStatus.ScanLine, which polls the graphics card's raster register.
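
A minimal sketch of that extrapolation approach, assuming your platform lets you timestamp VSYNC events (all names here are illustrative):
Code:
#include <chrono>

// Estimate the current raster position from a VSYNC heartbeat, for platforms
// without a raster register. All fields are illustrative assumptions.
struct RasterExtrapolator {
    using Clock = std::chrono::steady_clock;
    Clock::time_point lastVsync;  // timestamp of the most recent VSYNC event
    double refreshSeconds;        // rolling average of time between VSYNC events
    double totalLines;            // vertical total (active + VBI), e.g. 1125 for 1080p60
    double offsetLines;           // configurable offset (flip-at-start vs flip-at-end of VBLANK)

    // Estimated scanline number; values beyond the active height mean "in VBI".
    double CurrentScanline() const {
        std::chrono::duration<double> elapsed = Clock::now() - lastVsync;
        double frac = elapsed.count() / refreshSeconds;  // fraction of refresh elapsed
        return frac * totalLines + offsetLines;          // raster sweeps ~linearly in time
    }
};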

For NTSC emulators (~15,700 scanlines per second), at 1,000 buffer swaps per second you only need to plot ~15-16 emulator rasters between buffer swaps. At 10,000 buffer swaps per second you ONLY need to plot ~1-2 emulator rasters to the framebuffer (plus enough of the previous chunkfuls of already-plotted scanlines for jitter margin). You can even vary the number of scanlines between buffer swaps dynamically: if your software detects too tight a rolling-window chase margin, raise or lower the granularity of the buffer swaps in realtime, adaptively (see the sketch below). Or simply make it a configurable number in a configuration file -- faster computers at realtime priority can go as small as 2-or-3-scanline frameslices, while slower computers would use roughly 1/10th-screen-height frameslices.
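
The adaptive version can be a trivial control loop -- a sketch, with arbitrary assumed thresholds:
Code:
// Adapt frameslice granularity to the measured chase margin. Thresholds are
// arbitrary assumptions. 'measuredMarginLines' = how far the emulated frontier
// led the realworld raster at the last flip (in output scanlines).
void AdaptSliceHeight(int& sliceLines, int measuredMarginLines)
{
    const int minSlice = 2;    // fast machine: near line-accurate chase
    const int maxSlice = 108;  // slow machine: ~1/10th screen height at 1080p
    if (measuredMarginLines < sliceLines / 2 && sliceLines < maxSlice)
        sliceLines *= 2;       // chase too tight: coarser slices, fewer flips
    else if (measuredMarginLines > sliceLines * 4 && sliceLines > minSlice)
        sliceLines /= 2;       // lots of headroom: finer slices, lower lag
}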

So, that's not much performance needed.

It's just bus bandwidth to "wastefully" deliver the whole padded framebuffer repeatedly to the GPU using standard Direct3D or OpenGL APIs. As long as there's a way to poll the raster on the realworld platform (or a way of estimating it from VBLANK intervals, if the platform only gives you a VSYNC timing heartbeat without a raster register), this works with any standard VSYNC OFF buffer-swapping API. Exact-raster precision is unnecessary with my algorithm.

The simplest approach is to rasterplot a few scanlines on top of the previous refresh cycle's framebuffer. To minimize the odds of artifacts (e.g. flickering) during performance fluctuations such as hard disk accesses, you can reuse the previous frame's data (e.g. only keep the bottom part of the frame black, or reuse the previous refresh cycle entirely). In that case, tearing will simply appear briefly during those pauses where the realworld raster gets ahead of the emulated raster -- better than a black-flicker effect (you're just rasterplot-overwriting the previous refresh cycle's frame). But this is altogether unnecessary to prove the concept works.

You're still emulating the same number of pixels per second.
You're still emulating 1:1 emulated CPU.
You're still emulating the same number of emulator rasters.
You're still emulating the same number of emulator frames per second.

The only thing that's changed is the number of VSYNC OFF buffer swaps per second (on what is essentially mostly duplicate framebuffers).

If I can multiply buffer swaps by more than 35x in mere C#, using only 10% extra CPU overhead (simply reusing the framebuffer & rasterplotting a few more lines between buffer swaps) -- then there is certainly enough CPU headroom for higan to use the raster-follower algorithm.

Previously this wasn't possible or accurate enough, but I actually achieved 7,000 pageflips per second in C# on a GeForce 1080, on full-resolution 2560x1440 framebuffers, displaying 60fps material (so it's essentially just buffer swaps on mostly duplicate framebuffers). In the MonoGame engine. In a lowly garbage-collected language. I am sure you can do better in C++ or assembler -- or, you know, the language many emulators are written in :)

I imagine you can do well over 10,000 buffer swaps per second on duplicate buffers with low-resolution 320x240 framebuffers. If you wrote the emulator at the command line in an RTOS with a powerful GPU, you might even achieve ~15,700 buffer swaps per second outputting to arcade CRTs -- virtually line-accurate realworld-vs-emulator raster sync (you might need a 2-line chasebehind due to scaling fuzz, etc.) -- but I'd recommend a 1ms margin to begin with (15-16 scanlines at NTSC's ~15.7KHz horizontal scan rate). Remember, jitter doesn't matter much as long as the realworld raster stays behind (physically above) the emulated raster, including scaling. Even my C# test jittered by only ~0.1-0.2ms except when garbage-collecting. Again, you can do even better than I did in C, C++ or assembler -- maybe at Admin + Realtime priority you can get line-accurate raster sync at NTSC's ~15.7KHz under Windows -- but the beauty is you DON'T need to be line-accurate, just within a rolling-window jitter margin. See my diagram above.

[diagram]

  • As long as the emulated raster stays ahead of the real raster, the black part of the frame never appears.
  • Same number of pixels per second.
  • Still emulating 1:1 emulated CPU.
  • Still emulating the same number of emulator rasters.
  • Still emulating the same number of emulator frames per second.
  • It's only extra buffer swaps mid-raster (simulating a rolling-window buffer).

You only need to plot a few scanlines in chunkfuls at a time between VSYNC OFF pageflips.

Regardless, I can confirm it works if done properly! With only ~10% extra CPU (depending on desired granularity -- finer granularity, more overhead).

As long as the emulator is CPU-limited rather than GPU-limited, this algorithm is implementable even in emulators that consume 80-90% CPU (at coarser 1-2ms granularities). If your emulator uses less than 25-50% CPU, this algorithm can keep raster sync at 100-microsecond timescales (0.1ms-0.2ms lag); the granularity can be made adjustable (either automatically or via a configuration file). Today, a garden-variety i7 with a garden-variety $300 GPU easily does <1ms raster sync with <10% CPU. The beauty is the jitter margin -- and it's adjustable.

This successfully achieves essentially lagless VSYNC ON -- raster-synchronized -- with pixels coming out of the computer's output almost in realtime with the emulator raster, with only a few-scanline lagbehind (the performance jitter margin).


2018-03-16 23:26

Joined: 2014-09-27 09:36
Posts: 617
 Re: [diagram] New Lagless VSYNC ON Algorithm for emulator de
Boy, you're sure trying your hardest to get byuu to notice this. :P

Anyway, in what way is this 'Open Source Raster-Follower Algorithm' open source? I'd sure be interested in checking out this C# implementation you speak of, but I don't think you've shared it anywhere?


2018-03-17 09:36

Joined: 2017-11-25 16:43
Posts: 812
 Re: [diagram] New Lagless VSYNC ON Algorithm for emulator de
I'm glad you put the word emulator in quotation marks in relation to an FPGA-based system.


2018-03-17 10:02

Joined: 2014-09-27 09:39
Posts: 2949
 Re: [diagram] New Lagless VSYNC ON Algorithm for emulator de
>Anyway, in what way is this 'Open Source Raster-Follower Algorithm' open source?

It just seems to be a specific way to chase the scanline, so...

Also, definitely won't work as intended when playing 60fps content on a 144Hz display.


2018-03-17 13:27

Joined: 2018-03-15 23:09
Posts: 24
 Re: [diagram] New Lagless VSYNC ON Algorithm for emulator de
wareya wrote:
Also, definitely won't work as intended when playing 60fps content on a 144Hz display.

Many 144Hz displays do slow-scan at 60Hz, so you can decrease the refresh rate to get raster sync.
The right tool for the right job, obviously. I acknowledge that GSYNC/FreeSync monitors are among the best things to happen to emulator input lag.

Achieving the original buttonfeel in fighting games is an important hidden factor for game preservationists, and there are some cons to the fast-scan approach: the top-edge versus bottom-edge lag symmetry differs from a 60Hz display, so it won't always achieve the same buttonfeel (if button reads happen mid-scanout rather than during the VBI).

Blur Busters has historically measured input lag via high speed cameras, and I can confirm this technique is the closest thing possible to true realtime streaming of pixels from the GPU to the output at sub-refresh lag, while still using industry-standard APIs (Direct3D in this case).

The WinUAE author (with whom I communicate) is experimenting with this at the moment; he was also one of the first to implement software BFI, which I demoed in 2013 at http://www.testufo.com/blackframes (it works better on 120Hz displays than 60Hz).

I'm communicating (some privately) with other emu devs to find those interested in implementing realtime beam-racing synchronization between the emulated raster and the realworld raster.

P.S. One tricky challenge is that you need to begin plotting the emulator raster before RasterStatus.ScanLine begins incrementing (frustratingly, it stays at a fixed value during the VBI). The VBI length can be estimated by timing the duration of RasterStatus.InVBlank (in microseconds), and that becomes your starting pistol for beam racing the emulator raster. Alternatively, use the WinAPI QueryDisplayConfig() to get the exact horizontal scan rate AND the Vertical Total, then subtract the vertical resolution from the Vertical Total to get the VBI size in scanlines. With those two numbers (VBI size divided by horizontal scan rate), you can calculate the exact VBI time, to the sub-microsecond, on any graphics card.
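
A hedged sketch of that second (QueryDisplayConfig) method in C++, with error handling and multi-monitor selection abbreviated:
Code:
#include <windows.h>
#include <vector>

// Compute the VBI duration from the video signal timings (Windows 7+,
// link user32.lib). Returns 0.0 if no active target is found.
double VbiSeconds()
{
    UINT32 numPaths = 0, numModes = 0;
    GetDisplayConfigBufferSizes(QDC_ONLY_ACTIVE_PATHS, &numPaths, &numModes);
    std::vector<DISPLAYCONFIG_PATH_INFO> paths(numPaths);
    std::vector<DISPLAYCONFIG_MODE_INFO> modes(numModes);
    QueryDisplayConfig(QDC_ONLY_ACTIVE_PATHS, &numPaths, paths.data(),
                       &numModes, modes.data(), nullptr);

    for (UINT32 i = 0; i < numModes; ++i) {
        if (modes[i].infoType != DISPLAYCONFIG_MODE_INFO_TYPE_TARGET)
            continue;
        const DISPLAYCONFIG_VIDEO_SIGNAL_INFO& sig =
            modes[i].targetMode.targetVideoSignalInfo;
        double hScanRate = (double)sig.hSyncFreq.Numerator / sig.hSyncFreq.Denominator;
        UINT32 vbiLines  = sig.totalSize.cy - sig.activeSize.cy;  // Vertical Total minus active lines
        return vbiLines / hScanRate;  // VBI duration in seconds
    }
    return 0.0;
}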

Beam racing works fine with scaling (e.g. output scanline #540 of 1080p corresponding to emulated scanline #120 of 240p -- see the mapping sketch after this paragraph), and if you want border effects, just scale a little differently; the important thing is that the emulated raster stays physically below the real raster, however you decide to map out the emulator layout (HLSL effects, raster fuzz effects, border effects, whatnot). There may be slight divergences in lag linearity if the VBI-to-active ratio of the emulator differs from the realworld signal, but typically that is hundred-microsecond-scale stuff. Besides, you can use CRU (Custom Resolution Utility) to give the VBI a 480:525 ratio at whatever resolution you want (e.g. VT1181 for 1080p, which many newer LCD monitors will sync to, since 1080:1181 is almost identical to NTSC's 480:525 active-to-total ratio) -- that is, if you indeed want an exact-ratio VBI (though that's cherrypicking microseconds at this stage -- at least you have the option). Regardless, the beam chasing algorithm is VBI-size-independent; any VBI-time-ratio difference only introduces minor vertical lag-gradient nonlinearities (hundreds of microseconds) between the emu "signal" and the real signal.
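
The scaling mapping itself is one line -- a sketch, with illustrative parameter names:
Code:
// Map an emulated scanline to an output scanline when scaling (e.g. 240p
// emulated inside a 1080p signal). Parameter names are illustrative.
int EmuLineToOutputLine(int emuLine,       // emulated raster line (0..emuActive-1)
                        int emuActive,     // e.g. 240
                        int outActive,     // e.g. 1080
                        int outTopBorder)  // letterbox/border offset in output lines
{
    // With no border, emulated line #120 of 240p maps to output line #540 of 1080p.
    return outTopBorder + emuLine * outActive / emuActive;
}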

So, there you go -- two standard Windows API ways to calculate the VBI duration, for timing when to begin plotting the emu raster into the front buffer ahead of the realworld raster.

The virtual reality people are already doing this -- https://www.imgtec.com/blog/reducing-la ... rendering/ -- but I'm able to get the strips down to just a few scanlines tall (using C++ and realtime priority as Admin, single-scanline strips look doable for NTSC, with a 2-scanline jitter margin for <0.2ms input lag).

It's too bad we're currently stuck with ultra-high-swap-rate VSYNC OFF rather than front-buffer rendering -- that would be easier for beam racing -- but front-buffer rendering is frustratingly hard to do with current graphics APIs. Still, it's quite neat that we can blit full framebuffers to the GPU in less than the time interval of a single NTSC scanline! So we already have a stand-in for front-buffer rendering, using purely VSYNC OFF double-buffered rendering. How far we have come...

Nonetheless.... realtime beam racing is finally practical!

Stay tuned for open source implementations of realtime beam racing hitting the community soon. It'll be an optional setting in at least one emulator.


2018-03-17 23:20

Joined: 2018-03-15 23:09
Posts: 24
 Re: [diagram] New Lagless VSYNC ON Algorithm for emulator de
Update: Calamity has implemented this beam chasing algorithm in a MAME experiment: https://forums.blurbusters.com/viewtopi ... 750#p31750


2018-03-18 00:00

Joined: 2014-09-27 09:23
Posts: 2193
Location: Germany
 Re: [diagram] New Lagless VSYNC ON Algorithm for emulator de
This is only for lag reduction, right? Emulators for consoles with non-standard frame rates on fixed-rate displays would still need to slow down / speed up, or skip / duplicate frames...

_________________
My setup:
Super Famicom ("2/1/3" SNS-CPU-GPM-02) → Multi Out to SCART cable → EuroSCART to Mini cable → Framemeister (with Firebrandx' profiles) → AVerMedia Live Gamer Extreme capture unit → RECentral 4 viewing/recording software


2018-03-18 11:10