The term "distributed ray tracing" was originally coined by Robert Cook in his 1984 paper. His observation was that in order to perform anti-aliasing, a ray tracer needs to perform spatial upsampling - that is, to take more samples (i.e. shoot more rays) than there are pixels in the image and combine their results. One way to do this is to shoot multiple rays within each pixel and average their color values. However, if the renderer is already tracing multiple rays per pixel to obtain an anti-aliased image, then these rays can also be "distributed" across additional dimensions beyond the pixel position to sample effects that a single ray could not capture. The important bit is that this comes at no additional cost on top of the spatial upsampling, since you're already tracing those extra rays anyway. For example, if you shoot multiple rays within a pixel to compute an anti-aliased result, you get motion blur absolutely for free by also using a different time value for each ray (or soft shadows if each ray connects to a different point on the light source, or depth of field if each uses a different starting point on the aperture, and so on).
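To make the "extra dimensions for free" idea concrete, here's a small sketch of a per-pixel sampling loop that jitters both the sub-pixel position (anti-aliasing) and the time value (motion blur). This is purely illustrative - the `trace` function and all names here are hypothetical stand-ins for a real ray tracer, not code from Cook's paper:

```python
import random

def render_pixel(x, y, num_samples, trace, shutter_open=0.0, shutter_close=1.0):
    """Average several jittered samples for one pixel.

    Each sample gets its own sub-pixel position (anti-aliasing) and
    its own time value (motion blur). The time dimension comes "for
    free" on top of the spatial supersampling we already pay for.
    """
    total = 0.0
    for _ in range(num_samples):
        # Jitter the ray's origin within the pixel (anti-aliasing).
        sx = x + random.random()
        sy = y + random.random()
        # Give the same ray a random time inside the shutter interval
        # (motion blur) at no extra ray-tracing cost.
        t = random.uniform(shutter_open, shutter_close)
        total += trace(sx, sy, t)
    return total / num_samples

# A toy "scene" whose brightness depends on time, standing in for a
# real ray tracer (hypothetical, for demonstration only).
toy_trace = lambda sx, sy, t: 0.5 + 0.5 * t
print(render_pixel(0, 0, 64, toy_trace))
```

The same loop could hand each sample a random aperture point or a random light-source point instead; only the per-sample inputs change, not the number of rays traced.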
Monte Carlo ray tracing is a slightly ambiguous term. In most cases, it refers to rendering techniques that solve the rendering equation, introduced by Jim Kajiya in 1986, using Monte Carlo integration. Practically all modern rendering techniques that solve the rendering equation - path tracing, bidirectional path tracing, progressive photon mapping and VCM, for example - can be classified as Monte Carlo ray tracing techniques. The idea of Monte Carlo integration is that we can estimate the integral of any function by randomly choosing points in the integration domain and averaging the value of the function at those points. At a high level, Monte Carlo ray tracing uses this technique to integrate the amount of light arriving at the camera within a pixel in order to compute that pixel's value. For example, a path tracer does this by randomly picking a point within the pixel at which to shoot the first ray, and then continuing to randomly pick a direction to follow from the surface it lands on, and so forth. We could also randomly pick a position on the time axis if we want motion blur, or randomly pick a point on the aperture if we want depth of field, or...
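The underlying estimator is simple enough to show in a few lines. This sketch integrates an arbitrary 1D function by averaging random samples (the pixel integral in a renderer is the same idea in more dimensions); the function and names are illustrative, not taken from any particular renderer:

```python
import random

def mc_integrate(f, a, b, n):
    """Estimate the integral of f over [a, b] by averaging the
    function at n uniformly random points and scaling by the
    interval length -- the core idea behind Monte Carlo ray tracing."""
    width = b - a
    total = sum(f(a + random.random() * width) for _ in range(n))
    return width * total / n

# Integrate x^2 over [0, 1]; the exact answer is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(estimate)  # close to 0.3333
```

The estimate converges to the true integral as n grows, with error shrinking like 1/sqrt(n) - which is why renderers trade sample counts against noise.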
If this sounds very similar to distributed ray tracing, that's because it is! We can think of distributed ray tracing as a very informal description of a Monte Carlo algorithm that samples certain effects like soft shadows. Cook's paper lacks the mathematical framework to reason about it properly, but you could certainly implement distributed ray tracing using a simple Monte Carlo renderer. It's worth noting that distributed ray tracing does not describe any global illumination effects, which are naturally modeled in the rendering equation (in fairness, Kajiya's paper was published two years after Cook's).
You can think of Monte Carlo ray tracing as a more general version of distributed ray tracing. Monte Carlo ray tracing provides a general mathematical framework that lets you handle practically any effect, including those mentioned in the distributed ray tracing paper.
These days, "distributed ray tracing" is not really a term that's used to refer to the original algorithm. More often you will hear it in conjunction with "distribution effects", which are simply effects such as motion blur, depth of field or soft shadows that cannot be handled with a single-sample raytracer.