Hello,
Thank you for using ASE; we really appreciate it. Unfortunately, Depth Fade is not ideal under orthographic setups.
To give a bit of clarification on what's happening under the hood: the Screen Depth node fetches depth values directly from the depth buffer. Under a perspective projection these values are stored on a non-linear scale (roughly proportional to 1/distance), which gives better precision to values closer to the camera than to those farther away. On the ASE side, when fetching those values, we need to convert them into linear eye space so they can be used by other nodes (which expect linear values).
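Roughly speaking, that conversion boils down to something like the snippet below (a simplified sketch assuming a standard, non-reversed depth buffer; nearPlane/farPlane stand in for the camera clip planes and are not actual node ports):

```
// Simplified sketch: remap a raw perspective depth value (0..1) back to a
// linear eye-space distance. Assumes a standard, non-reversed depth buffer.
float LinearizePerspectiveDepth( float rawDepth, float nearPlane, float farPlane )
{
    return ( nearPlane * farPlane ) / ( farPlane - rawDepth * ( farPlane - nearPlane ) );
}
```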
When using an Orthographic camera, the depth values are instead stored in the depth buffer as linear depth values. This is because in an orthographic projection objects are always rendered at the same size regardless of their distance from the camera, so there's no reason to give closer objects better accuracy.
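For reference, recovering an eye-space distance from an orthographic depth buffer is just a linear remap between the clip planes (again a sketch, ignoring reversed-Z platforms):

```
// Sketch: orthographic depth is already linear, so eye-space distance is a
// simple remap between the camera clip planes.
float LinearizeOrthographicDepth( float rawDepth, float nearPlane, float farPlane )
{
    return lerp( nearPlane, farPlane, rawDepth );
}
```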
In order to approximate the effect you will have to disable "Convert To Linear" in the Screen Depth node and adjust your camera's Far/Near Clip values. I attached a simple example for quick reference; do keep in mind that, as mentioned above, the results will be slightly different since Depth Fade is not ideal for orthographic cameras.
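If you need something closer to the real Depth Fade behaviour, the underlying math is roughly the sketch below; you could reproduce it with a Custom Expression node, feeding it the raw Screen Depth and the surface's own eye depth (the parameter names here are placeholders, not actual ASE ports):

```
// Sketch: manual depth fade for an orthographic camera.
// fadeDistance controls how soft the fade is, in world units.
float OrthographicDepthFade( float rawSceneDepth, float surfaceEyeDepth,
                             float nearPlane, float farPlane, float fadeDistance )
{
    // Orthographic raw depth is already linear, so remap it to eye space first.
    float sceneEyeDepth = lerp( nearPlane, farPlane, rawSceneDepth );
    // 0 where the surface touches the scene geometry, 1 once the gap
    // behind it reaches fadeDistance.
    return saturate( ( sceneEyeDepth - surfaceEyeDepth ) / fadeDistance );
}
```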
Alternatively, depending on your current requirements, consider using Fog or simple position-based transparency. Do note that Transparency is not a perfect solution; it really depends on what you intend to create.
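Just to illustrate what position-based transparency could look like (a rough sketch; worldPos and cameraPos would come from the corresponding position nodes, and fadeStart/fadeEnd are arbitrary distances you would expose as properties):

```
// Sketch: fade alpha with distance from the camera using world position,
// ignoring the depth buffer entirely. fadeStart/fadeEnd are world-space distances.
float DistanceBasedAlpha( float3 worldPos, float3 cameraPos, float fadeStart, float fadeEnd )
{
    float dist = distance( worldPos, cameraPos );
    return 1.0 - saturate( ( dist - fadeStart ) / ( fadeEnd - fadeStart ) );
}
```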
Looking forward to your feedback.