huge changes to everything (see below).

Changelog:
- re-enable use in release build.
- remove ftpsrv and untitled from builtin ghdl options, as both packages are available in the appstore.
- add image viewer (png, jpg, bmp)
- add music player (bfstm, bfwav, mp3, wav, ogg)
- add id3 tag parsing support for mp3.
- add "decryption" of GTA Vice City mp3.
- add usbdvd support for music playback and file browsing.
- add nsz export support (solid, block, ldm).
- add xcz export support (same as above).
- add nro fs proper mount support (romfs, nacp, icon).
- add program nca fs support.
- add bfsar fs support.
- re-write the usb protocol, still wip. replaces tinfoil protocol.
- all threads are now created with pre-emptive scheduling and the proper affinity mask set.
- fix oob crash in libpulsar when a bfwav was opened that had more than 2 channels.
- bump yyjson version.
- bump usbhsfs version.
- disable nvjpg.
- add support for theme music of any supported playback type (bfstm, bfwav, mp3, wav, ogg).
- add support for setting background music.
- add async exit to blocking threads (download, nxlink, ftpsrv) to reduce exit time.
- add support for dumping to pc via usb.
- add null, deflate, zstd hash options, mainly used for benchmarking.
- add sidebar slider (currently unused).
- file_viewer can now be used with any filesystem.
- filebrowser will only ever stat a file once. previously it would keep stat'ing until it succeeded.
- disabled themezer due to the api breaking; i'm not willing to keep maintaining it.
- disable zlt handling in usbds as it's not needed for my apis because the size is always known.
- remove usbds enums and GetSpeed() as i pr'd it to libnx.
- added support for mounting nca's from any source, including files, memory, nsps, xcis etc.
- split the lru cache into its own header as it's now used in multiple places (nsz, all mounted options).
- add support for fetching and decrypting es personalised tickets.
- fix es common ticket conversion, where i forgot to also convert the cert chain.
- remove the download default music option.
- improve performance of libpulsar when opening a bfsar by removing the large setvbuf option. instead, use the default 1k buffer and handle large buffers manually in sphaira using an lru cache (todo: just write my own bfsar parser).
- during app init and exit, load times have been halved as i now load/exit async. timestamps have also been added to measure how long everything takes.
- download now async loads / exits the etag json file to improve init times.
- add custom zip io to dumper to support writing a zip to any dest (such as usb).
- dumper now returns a proper error if the transfer was cancelled by the user.
- fatfs mount now sets the timestamp for files.
- fatfs mount handles folders with the archive bit by reporting them as a file.
- ftpsrv config is async loaded to speed up load times.
- nxlink now retries connect/accept by handling blocking rather than just bailing out.
- added support for minini floats.
- thread_file_transfer now spawns 3 threads rather than 2, with the middle thread acting as an optional processor (mainly used for compressing/decompressing).
- added spinner to progress box, taken from nvg demo.
- progress box disables sleep mode on init.
- add gamecard detection to the game menu to trigger a refresh.
- handle xci that have the key area prepended.
- change gamecard mount fs to use the xci mount code instead of native fs; that way we can see all the partitions rather than just secure.
- reformat the ghdl entries to show the timestamp first.
- support for exporting saves to pc via usb.
- zip fs now uses lru cache.
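several of the entries above (nsz, the mounted nca options, the zip fs) now share a single lru cache header. as a rough illustration of the idea only, not sphaira's actual header or api, a minimal lru cache in C++ might look like:

```cpp
#include <cstddef>
#include <list>
#include <unordered_map>
#include <utility>

// Fixed-capacity LRU cache: lookups move an entry to the front of the
// usage list; inserts evict the least recently used entry once full.
// (Illustrative sketch; class and method names are hypothetical.)
template <typename K, typename V>
class LruCache {
public:
    explicit LruCache(std::size_t capacity) : m_capacity{capacity} {}

    // Returns a pointer to the cached value (marking it most recently
    // used), or nullptr on a cache miss.
    V* Get(const K& key) {
        const auto it = m_map.find(key);
        if (it == m_map.end()) {
            return nullptr;
        }
        // Move the entry to the front of the usage list (O(1), no
        // iterator invalidation thanks to std::list::splice).
        m_list.splice(m_list.begin(), m_list, it->second);
        return &it->second->second;
    }

    // Inserts or updates an entry, evicting the least recently used
    // entry (the back of the list) if the cache is full.
    void Put(const K& key, V value) {
        if (auto* v = Get(key)) {
            *v = std::move(value);
            return;
        }
        if (m_list.size() == m_capacity) {
            m_map.erase(m_list.back().first);
            m_list.pop_back();
        }
        m_list.emplace_front(key, std::move(value));
        m_map[key] = m_list.begin();
    }

private:
    std::size_t m_capacity;
    std::list<std::pair<K, V>> m_list; // front = most recently used.
    std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> m_map;
};
```

as the fatfs read path in the diff below shows, sphaira keeps two such caches side by side, one for large (>= 16k) reads and one for small random reads, so sequential file transfers don't evict the small buffers used by directory listings.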
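the thread_file_transfer change (3 threads rather than 2) describes a read -> process -> write pipeline, where the middle thread is an optional stage for compression/decompression. a minimal sketch of that shape, with illustrative names that are not sphaira's actual code:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <optional>
#include <string>
#include <thread>
#include <vector>

// A tiny thread-safe queue used to pass buffers between stages.
template <typename T>
class Channel {
public:
    void Push(T v) {
        std::unique_lock lk{m_mutex};
        m_queue.push_back(std::move(v));
        m_cv.notify_one();
    }
    void Close() {
        std::unique_lock lk{m_mutex};
        m_closed = true;
        m_cv.notify_all();
    }
    // Blocks until data arrives; returns std::nullopt once the channel
    // is closed and fully drained.
    std::optional<T> Pop() {
        std::unique_lock lk{m_mutex};
        m_cv.wait(lk, [this] { return !m_queue.empty() || m_closed; });
        if (m_queue.empty()) return std::nullopt;
        T v = std::move(m_queue.front());
        m_queue.pop_front();
        return v;
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::deque<T> m_queue;
    bool m_closed{};
};

// Runs read -> process -> write as three threads; the middle stage is
// where optional compression/decompression would sit.
std::vector<std::string> RunPipeline(const std::vector<std::string>& chunks,
                                     std::function<std::string(std::string)> process) {
    Channel<std::string> to_process, to_write;
    std::vector<std::string> written;

    std::thread reader{[&] {
        for (const auto& c : chunks) to_process.Push(c);
        to_process.Close();
    }};
    std::thread processor{[&] {
        while (auto c = to_process.Pop()) to_write.Push(process(std::move(*c)));
        to_write.Close();
    }};
    std::thread writer{[&] {
        // Only this thread touches `written`, and it is joined before return.
        while (auto c = to_write.Pop()) written.push_back(std::move(*c));
    }};

    reader.join(); processor.join(); writer.join();
    return written;
}
```

the benefit of the third thread is that reading, (de)compressing, and writing can overlap; when no processing is needed, the middle stage degenerates to a pass-through.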
Author: ITotalJustice
Date: 2025-08-28 23:12:34 +01:00
parent cd6fed6aae
commit f0bdc01156
127 changed files with 14623 additions and 13020 deletions


@@ -7,11 +6,6 @@
 namespace sphaira::devoptab::common {
-Result BufferedData::Read(void* buf, s64 off, s64 size) {
-    u64 bytes_read;
-    return Read(buf, off, size, &bytes_read);
-}
-// todo: change above function to handle bytes read instead.
 Result BufferedData::Read(void *_buffer, s64 file_off, s64 read_size, u64* bytes_read) {
     auto dst = static_cast<u8*>(_buffer);
@@ -27,7 +22,7 @@ Result BufferedData::Read(void *_buffer, s64 file_off, s64 read_size, u64* bytes_read) {
     const auto off = file_off - m_off;
     const auto size = std::min<s64>(read_size, m_size - off);
     if (size) {
-        std::memcpy(dst, m_data + off, size);
+        std::memcpy(dst, m_data.data() + off, size);
         read_size -= size;
         file_off += size;
@@ -38,7 +33,7 @@ Result BufferedData::Read(void *_buffer, s64 file_off, s64 read_size, u64* bytes_read) {
     }
     if (read_size) {
-        const auto alloc_size = sizeof(m_data);
+        const auto alloc_size = std::min<s64>(m_data.size(), capacity - file_off);
         m_off = 0;
         m_size = 0;
         u64 bytes_read;
@@ -56,12 +51,11 @@ Result BufferedData::Read(void *_buffer, s64 file_off, s64 read_size, u64* bytes_read) {
         const auto max_advance = std::min<u64>(amount, alloc_size);
         m_off = file_off - max_advance;
         m_size = max_advance;
-        std::memcpy(m_data, dst - max_advance, max_advance);
+        std::memcpy(m_data.data(), dst - max_advance, max_advance);
     } else {
-        R_TRY(source->Read(m_data, file_off, alloc_size, &bytes_read));
-        const auto bytes_read = alloc_size;
+        R_TRY(source->Read(m_data.data(), file_off, alloc_size, &bytes_read));
         const auto max_advance = std::min<u64>(read_size, bytes_read);
-        std::memcpy(dst, m_data, max_advance);
+        std::memcpy(dst, m_data.data(), max_advance);
         m_off = file_off;
         m_size = bytes_read;
@@ -77,6 +71,91 @@ Result BufferedData::Read(void *_buffer, s64 file_off, s64 read_size, u64* bytes_read) {
     R_SUCCEED();
 }
+
+Result LruBufferedData::Read(void *_buffer, s64 file_off, s64 read_size, u64* bytes_read) {
+    // log_write("[FATFS] read offset: %zu size: %zu\n", file_off, read_size);
+    auto dst = static_cast<u8*>(_buffer);
+    size_t amount = 0;
+    *bytes_read = 0;
+
+    R_UNLESS(file_off < capacity, FsError_UnsupportedOperateRangeForFileStorage);
+    read_size = std::min<s64>(read_size, capacity - file_off);
+
+    // fatfs reads in max 16k chunks.
+    // knowing this, it's possible to detect large file reads by simply checking if
+    // the read size is 16k (or more, maybe in the future).
+    // however, this would destroy random access performance, such as fetching 512 bytes.
+    // the fix was to have 2 LRU caches, one for large data and the other for small (anything below 16k).
+    // this results in file reads going from 32MB -> 184MB, and directory listing is instant.
+    const auto large_read = read_size >= 1024 * 16;
+    auto& lru = large_read ? lru_cache[1] : lru_cache[0];
+
+    for (auto list = lru.begin(); list; list = list->next) {
+        const auto& m_buffered = list->data;
+        if (m_buffered->size) {
+            // check if we can read this data into the beginning of dst.
+            if (file_off < m_buffered->off + m_buffered->size && file_off >= m_buffered->off) {
+                const auto off = file_off - m_buffered->off;
+                const auto size = std::min<s64>(read_size, m_buffered->size - off);
+                if (size) {
+                    // log_write("[FAT] cache HIT at: %zu\n", file_off);
+                    std::memcpy(dst, m_buffered->data + off, size);
+                    read_size -= size;
+                    file_off += size;
+                    amount += size;
+                    dst += size;
+                    lru.Update(list);
+                    break;
+                }
+            }
+        }
+    }
+
+    if (read_size) {
+        // log_write("[FAT] cache miss at: %zu %zu\n", file_off, read_size);
+        auto alloc_size = large_read ? CACHE_LARGE_ALLOC_SIZE : std::max<u64>(read_size, 512 * 24);
+        alloc_size = std::min<s64>(alloc_size, capacity - file_off);
+
+        u64 bytes_read;
+        auto m_buffered = lru.GetNextFree();
+        m_buffered->Allocate(alloc_size);
+
+        // if the dst is big enough, read data in place.
+        if (read_size > alloc_size) {
+            R_TRY(source->Read(dst, file_off, read_size, &bytes_read));
+            // R_TRY(fsStorageRead(storage, file_off, dst, read_size));
+            read_size -= bytes_read;
+            file_off += bytes_read;
+            amount += bytes_read;
+            dst += bytes_read;
+
+            // save the last chunk of data to the m_buffered io.
+            const auto max_advance = std::min<u64>(amount, alloc_size);
+            m_buffered->off = file_off - max_advance;
+            m_buffered->size = max_advance;
+            std::memcpy(m_buffered->data, dst - max_advance, max_advance);
+        } else {
+            R_TRY(source->Read(m_buffered->data, file_off, alloc_size, &bytes_read));
+            // R_TRY(fsStorageRead(storage, file_off, m_buffered->data, alloc_size));
+            const auto max_advance = std::min<u64>(read_size, bytes_read);
+            std::memcpy(dst, m_buffered->data, max_advance);
+            m_buffered->off = file_off;
+            m_buffered->size = bytes_read;
+            read_size -= max_advance;
+            file_off += max_advance;
+            amount += max_advance;
+            dst += max_advance;
+        }
+    }
+
+    *bytes_read = amount;
+    R_SUCCEED();
+}
+
 bool fix_path(const char* str, char* out) {
     // log_write("[SAVE] got path: %s\n", str);